dar-2.7.17/AUTHORS

            D i s k   A R c h i v e  -  D A R
            ---------  -----   -      - - -

Original Design & Development:
    Denis Corbin

Several patches are from:
    Wesley Leggette, Dave Vasilevsky, Nick Alcock, Brian May,
    Martin Jost, Jurgen Menden, Todd Vierling, Omer Enbar,
    David Rose, Alex Kohlmeyer, Dietrich Rothe, Moritz Franosch,
    John Little, Chris Martin, Michael Roitzsch, Andrea Palazzi,
    Dwayne C. Litzenberger, Erik Wasser, Sonni Norlov, David Fries,
    Jan-Pascal van Best.

Translations of program messages are from:
    Peter Landgren for Swedish
    Markus Kamp for German
    Denis Corbin for French

http://dar.linux.free.fr/     (main site)
http://dar.sourceforge.net/   (mirror site)
https://github.com/Edrusb/DAR (source code repository)

Support requests are answered only on the mailing-list or other public
areas, never privately: your questions and their answers may be of
interest to others. So if you need support, please first read:

    http://dar.linux.free.fr/support.html

Sharing must go both ways.

For non-support requests only, you are welcome to send an email to
Denis at dar.linux@free.fr. Be sure to add the following string (the
quotes are not necessary) "[EARTH IS BEAUTIFUL]" to the subject of
your email, so that it passes the anti-spam filter.

Here follows an extract of Denis Corbin's resume:

1990-1992  Classes Prepas (math. sup. math. spe. M)
1992-1995  ENSEIRB - Ecole Nationale Superieur d'Electronique Informatique
           et Radiocommunications de Bordeaux, Promo I-1995.
1995       3 months Erasmus project in Bologna (Italy).
1995-1996  Military Service: 28e Regiment Transmission, Armee de Terre.
           Formation Equipex-RITA, operator on the RITTER network.
           Military training PMTE (Preparation Militaire Terre Encadrement).
1997-2000  Software developer, Team leader, Project leader for Netcomsystems
           (renamed Spirent Communications in 1999). Designed software for
           the Smartbits network tester.
2000-2002  Network design, architecture and network support for the DCN of
           Alcatel Submarine Networks.
2002-2003  DNS maintainer and Firewall admin for the GPRS network of
           Bouygues Telecom.
2003-2004  Network design and support for SFR's wap and SMS platform.
           Managed change of ISP connectivity (activating BGP dual homing)
           for SFR with no loss of service.
Nov. 2003  Cisco Certified CCNA
2004-2005  Validation responsible of the hosting infrastructure that
           provides data to UMTS mobile phones for SFR.
Dec. 2004  Cisco Certified CCNP
2005-2011  Network and security engineer at Telindus France.
July 2008  Cisco Certified Internetwork Expert, CCIE #21568 R&S
June 2009  Checkpoint Certified Security Expert (CCSE)
2011-2015  Pre-sales engineer at Telindus for WAN optimization (Riverbed
           solution), LAN Campus and Datacenter design and architecture
           (Cisco Nexus and Catalyst solutions).
Nov. 2011  Riverbed certified RCSA
Apr. 2012  Cisco Certified Sales Expert
Oct. 2014  Network Solution Architect at HP (today Hewlett Packard
           Enterprise, aka HPE)
Feb. 2015  HP Certified ASE
Feb. 2016  HPE Certified Master ASE
Nov. 2016  System Engineer at Aruba, a Hewlett Packard Enterprise company
Jul. 2018  Recertified CCIE (10 years anniversary)
Sep. 2018  Certified Aruba ACSPv1
Dec. 2018  System Engineer for the "Composable Fabric" SDN solution
           (Plexxi acquisition) at HPE
July 2020  System Engineer for the Pensando solution (HPE-Pensando
           partnership)
Sep. 2020  HPE Hybrid IT v2 ATP certification
Apr. 2021  Solution Architect for the Ezmeral software stack at HPE

dar-2.7.17/missing

#! /bin/sh
# Common wrapper for a few potentially missing GNU programs.

scriptversion=2018-03-07.03; # UTC

# Copyright (C) 1996-2021 Free Software Foundation, Inc.
# Originally written by François Pinard, 1996.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.

if test $# -eq 0; then
  echo 1>&2 "Try '$0 --help' for more information"
  exit 1
fi

case $1 in

  --is-lightweight)
    # Used by our autoconf macros to check whether the available missing
    # script is modern enough.
    exit 0
    ;;

  --run)
    # Back-compat with the calling convention used by older automake.
    shift
    ;;

  -h|--h|--he|--hel|--help)
    echo "\
$0 [OPTION]... PROGRAM [ARGUMENT]...

Run 'PROGRAM [ARGUMENT]...', returning a proper advice when this
fails due to PROGRAM being missing or too old.

Options:
  -h, --help      display this help and exit
  -v, --version   output version information and exit

Supported PROGRAM values:
  aclocal  autoconf  autoheader  autom4te  automake  makeinfo
  bison  yacc  flex  lex  help2man

Version suffixes to PROGRAM as well as the prefixes 'gnu-', 'gnu', and
'g' are ignored when checking the name.

Send bug reports to <bug-automake@gnu.org>."
    exit $?
    ;;

  -v|--v|--ve|--ver|--vers|--versi|--versio|--version)
    echo "missing $scriptversion (GNU Automake)"
    exit $?
    ;;

  -*)
    echo 1>&2 "$0: unknown '$1' option"
    echo 1>&2 "Try '$0 --help' for more information"
    exit 1
    ;;

esac

# Run the given program, remember its exit status.
"$@"; st=$?

# If it succeeded, we are done.
test $st -eq 0 && exit 0

# Also exit now if it failed (or wasn't found), and '--version' was
# passed; such an option is passed most likely to detect whether the
# program is present and works.
case $2 in --version|--help) exit $st;; esac

# Exit code 63 means version mismatch.  This often happens when the user
# tries to use an ancient version of a tool on a file that requires a
# minimum version.
if test $st -eq 63; then
  msg="probably too old"
elif test $st -eq 127; then
  # Program was missing.
  msg="missing on your system"
else
  # Program was found and executed, but failed.  Give up.
  exit $st
fi

perl_URL=https://www.perl.org/
flex_URL=https://github.com/westes/flex
gnu_software_URL=https://www.gnu.org/software

program_details ()
{
  case $1 in
    aclocal|automake)
      echo "The '$1' program is part of the GNU Automake package:"
      echo "<$gnu_software_URL/automake>"
      echo "It also requires GNU Autoconf, GNU m4 and Perl in order to run:"
      echo "<$gnu_software_URL/autoconf>"
      echo "<$gnu_software_URL/m4/>"
      echo "<$perl_URL>"
      ;;
    autoconf|autom4te|autoheader)
      echo "The '$1' program is part of the GNU Autoconf package:"
      echo "<$gnu_software_URL/autoconf/>"
      echo "It also requires GNU m4 and Perl in order to run:"
      echo "<$gnu_software_URL/m4/>"
      echo "<$perl_URL>"
      ;;
  esac
}

give_advice ()
{
  # Normalize program name to check for.
  normalized_program=`echo "$1" | sed '
    s/^gnu-//; t
    s/^gnu//; t
    s/^g//; t'`

  printf '%s\n' "'$1' is $msg."

  configure_deps="'configure.ac' or m4 files included by 'configure.ac'"
  case $normalized_program in
    autoconf*)
      echo "You should only need it if you modified 'configure.ac',"
      echo "or m4 files included by it."
      program_details 'autoconf'
      ;;
    autoheader*)
      echo "You should only need it if you modified 'acconfig.h' or"
      echo "$configure_deps."
      program_details 'autoheader'
      ;;
    automake*)
      echo "You should only need it if you modified 'Makefile.am' or"
      echo "$configure_deps."
      program_details 'automake'
      ;;
    aclocal*)
      echo "You should only need it if you modified 'acinclude.m4' or"
      echo "$configure_deps."
      program_details 'aclocal'
      ;;
    autom4te*)
      echo "You might have modified some maintainer files that require"
      echo "the 'autom4te' program to be rebuilt."
      program_details 'autom4te'
      ;;
    bison*|yacc*)
      echo "You should only need it if you modified a '.y' file."
      echo "You may want to install the GNU Bison package:"
      echo "<$gnu_software_URL/bison/>"
      ;;
    lex*|flex*)
      echo "You should only need it if you modified a '.l' file."
      echo "You may want to install the Fast Lexical Analyzer package:"
      echo "<$flex_URL>"
      ;;
    help2man*)
      echo "You should only need it if you modified a dependency" \
           "of a man page."
      echo "You may want to install the GNU Help2man package:"
      echo "<$gnu_software_URL/help2man/>"
      ;;
    makeinfo*)
      echo "You should only need it if you modified a '.texi' file, or"
      echo "any other file indirectly affecting the aspect of the manual."
      echo "You might want to install the Texinfo package:"
      echo "<$gnu_software_URL/texinfo/>"
      echo "The spurious makeinfo call might also be the consequence of"
      echo "using a buggy 'make' (AIX, DU, IRIX), in which case you might"
      echo "want to install GNU make:"
      echo "<$gnu_software_URL/make/>"
      ;;
    *)
      echo "You might have modified some files without having the proper"
      echo "tools for further handling them.  Check the 'README' file, it"
      echo "often tells you about the needed prerequisites for installing"
      echo "this package.  You may also peek at any GNU archive site, in"
      echo "case some other package contains this missing '$1' program."
      ;;
  esac
}

give_advice "$1" | sed -e '1s/^/WARNING: /' \
                       -e '2,$s/^/         /' >&2

# Propagate the correct exit status (expected to be 127 for a program
# not found, 63 for a program that failed due to version mismatch).
exit $st

# Local variables:
# eval: (add-hook 'before-save-hook 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC0"
# time-stamp-end: "; # UTC"
# End:

dar-2.7.17/config.rpath

#! /bin/sh
# Output a system dependent set of variables, describing how to set the
# run time search path of shared libraries in an executable.
#
# Copyright 1996-2020 Free Software Foundation, Inc.
# Taken from GNU libtool, 2001
# Originally by Gordon Matzigkeit, 1996
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.
#
# The first argument passed to this file is the canonical host specification,
#    CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM
# or
#    CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM
# The environment variables CC, GCC, LDFLAGS, LD, with_gnu_ld
# should be set by the caller.
#
# The set of defined variables is at the end of this script.

# Known limitations:
# - On IRIX 6.5 with CC="cc", the run time search path must not be longer
#   than 256 bytes, otherwise the compiler driver will dump core.  The only
#   known workaround is to choose shorter directory names for the build
#   directory and/or the installation directory.

# All known linkers require a '.a' archive for static linking (except MSVC,
# which needs '.lib').
libext=a
shrext=.so

host="$1"
host_cpu=`echo "$host" | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'`
host_vendor=`echo "$host" | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\2/'`
host_os=`echo "$host" | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`

# Code taken from libtool.m4's _LT_CC_BASENAME.
for cc_temp in $CC""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done cc_basename=`echo "$cc_temp" | sed -e 's%^.*/%%'` # Code taken from libtool.m4's _LT_COMPILER_PIC. wl= if test "$GCC" = yes; then wl='-Wl,' else case "$host_os" in aix*) wl='-Wl,' ;; mingw* | cygwin* | pw32* | os2* | cegcc*) ;; hpux9* | hpux10* | hpux11*) wl='-Wl,' ;; irix5* | irix6* | nonstopux*) wl='-Wl,' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu) case $cc_basename in ecc*) wl='-Wl,' ;; icc* | ifort*) wl='-Wl,' ;; lf95*) wl='-Wl,' ;; nagfor*) wl='-Wl,-Wl,,' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) wl='-Wl,' ;; ccc*) wl='-Wl,' ;; xl* | bgxl* | bgf* | mpixl*) wl='-Wl,' ;; como) wl='-lopt=' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ F* | *Sun*Fortran*) wl= ;; *Sun\ C*) wl='-Wl,' ;; esac ;; esac ;; newsos6) ;; *nto* | *qnx*) ;; osf3* | osf4* | osf5*) wl='-Wl,' ;; rdos*) ;; solaris*) case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) wl='-Qoption ld ' ;; *) wl='-Wl,' ;; esac ;; sunos4*) wl='-Qoption ld ' ;; sysv4 | sysv4.2uw2* | sysv4.3*) wl='-Wl,' ;; sysv4*MP*) ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) wl='-Wl,' ;; unicos*) wl='-Wl,' ;; uts4*) ;; esac fi # Code taken from libtool.m4's _LT_LINKER_SHLIBS. hardcode_libdir_flag_spec= hardcode_libdir_separator= hardcode_direct=no hardcode_minus_L=no case "$host_os" in cygwin* | mingw* | pw32* | cegcc*) # FIXME: the MSVC++ port hasn't been tested in a loooong time # When not using gcc, we currently assume that we are using # Microsoft Visual C++. if test "$GCC" != yes; then with_gnu_ld=no fi ;; interix*) # we just hope/assume this is gcc and not c89 (= MSVC++) with_gnu_ld=yes ;; openbsd*) with_gnu_ld=no ;; esac ld_shlibs=yes if test "$with_gnu_ld" = yes; then # Set some defaults for GNU ld with shared library support. These # are reset later if shared libraries are not supported. 
Putting them # here allows them to be overridden if necessary. # Unlike libtool, we use -rpath here, not --rpath, since the documented # option of GNU ld is called -rpath, not --rpath. hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' case "$host_os" in aix[3-9]*) # On AIX/PPC, the GNU linker is very broken if test "$host_cpu" != ia64; then ld_shlibs=no fi ;; amigaos*) case "$host_cpu" in powerpc) ;; m68k) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; beos*) if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then : else ld_shlibs=no fi ;; cygwin* | mingw* | pw32* | cegcc*) # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. hardcode_libdir_flag_spec='-L$libdir' if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then : else ld_shlibs=no fi ;; haiku*) ;; interix[3-9]*) hardcode_direct=no hardcode_libdir_flag_spec='${wl}-rpath,$libdir' ;; gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu) if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then : else ld_shlibs=no fi ;; netbsd*) ;; solaris*) if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then ld_shlibs=no elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then : else ld_shlibs=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) ld_shlibs=no ;; *) if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then hardcode_libdir_flag_spec='`test -z "$SCOABSPATH" && echo ${wl}-rpath,$libdir`' else ld_shlibs=no fi ;; esac ;; sunos4*) hardcode_direct=yes ;; *) if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then : else ld_shlibs=no fi ;; esac if test "$ld_shlibs" = no; then hardcode_libdir_flag_spec= fi else case "$host_os" in aix3*) # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. 
hardcode_minus_L=yes if test "$GCC" = yes; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. hardcode_direct=unsupported fi ;; aix[4-9]*) if test "$host_cpu" = ia64; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no else aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # need to do runtime linking. case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then aix_use_runtimelinking=yes break fi done ;; esac fi hardcode_direct=yes hardcode_libdir_separator=':' if test "$GCC" = yes; then case $host_os in aix4.[012]|aix4.[012].*) collect2name=`${CC} -print-prog-name=collect2` if test -f "$collect2name" && \ strings "$collect2name" | grep resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 hardcode_direct=unsupported hardcode_minus_L=yes hardcode_libdir_flag_spec='-L$libdir' hardcode_libdir_separator= fi ;; esac fi # Begin _LT_AC_SYS_LIBPATH_AIX. echo 'int main () { return 0; }' > conftest.c ${CC} ${LDFLAGS} conftest.c -o conftest aix_libpath=`dump -H conftest 2>/dev/null | sed -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; } }'` if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest 2>/dev/null | sed -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; } }'` fi if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib" fi rm -f conftest.c conftest # End _LT_AC_SYS_LIBPATH_AIX. 
if test "$aix_use_runtimelinking" = yes; then hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath" else if test "$host_cpu" = ia64; then hardcode_libdir_flag_spec='${wl}-R $libdir:/usr/lib:/lib' else hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath" fi fi ;; amigaos*) case "$host_cpu" in powerpc) ;; m68k) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; bsdi[45]*) ;; cygwin* | mingw* | pw32* | cegcc*) # When not using gcc, we currently assume that we are using # Microsoft Visual C++. # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. hardcode_libdir_flag_spec=' ' libext=lib ;; darwin* | rhapsody*) hardcode_direct=no if { case $cc_basename in ifort*) true;; *) test "$GCC" = yes;; esac; }; then : else ld_shlibs=no fi ;; dgux*) hardcode_libdir_flag_spec='-L$libdir' ;; freebsd2.[01]*) hardcode_direct=yes hardcode_minus_L=yes ;; freebsd* | dragonfly*) hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes ;; hpux9*) hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: hardcode_direct=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. hardcode_minus_L=yes ;; hpux10*) if test "$with_gnu_ld" = no; then hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: hardcode_direct=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. hardcode_minus_L=yes fi ;; hpux11*) if test "$with_gnu_ld" = no; then hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: case $host_cpu in hppa*64*|ia64*) hardcode_direct=no ;; *) hardcode_direct=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
hardcode_minus_L=yes ;; esac fi ;; irix5* | irix6* | nonstopux*) hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: ;; netbsd*) hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes ;; newsos6) hardcode_direct=yes hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: ;; *nto* | *qnx*) ;; openbsd*) if test -f /usr/libexec/ld.so; then hardcode_direct=yes if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then hardcode_libdir_flag_spec='${wl}-rpath,$libdir' else case "$host_os" in openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*) hardcode_libdir_flag_spec='-R$libdir' ;; *) hardcode_libdir_flag_spec='${wl}-rpath,$libdir' ;; esac fi else ld_shlibs=no fi ;; os2*) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; osf3*) hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: ;; osf4* | osf5*) if test "$GCC" = yes; then hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' else # Both cc and cxx compiler support -rpath directly hardcode_libdir_flag_spec='-rpath $libdir' fi hardcode_libdir_separator=: ;; solaris*) hardcode_libdir_flag_spec='-R$libdir' ;; sunos4*) hardcode_libdir_flag_spec='-L$libdir' hardcode_direct=yes hardcode_minus_L=yes ;; sysv4) case $host_vendor in sni) hardcode_direct=yes # is this really true??? ;; siemens) hardcode_direct=no ;; motorola) hardcode_direct=no #Motorola manual says yes, but my tests say they lie ;; esac ;; sysv4.3*) ;; sysv4*MP*) if test -d /usr/nec; then ld_shlibs=yes fi ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) ;; sysv5* | sco3.2v5* | sco5v6*) hardcode_libdir_flag_spec='`test -z "$SCOABSPATH" && echo ${wl}-R,$libdir`' hardcode_libdir_separator=':' ;; uts4*) hardcode_libdir_flag_spec='-L$libdir' ;; *) ld_shlibs=no ;; esac fi # Check dynamic linker characteristics # Code taken from libtool.m4's _LT_SYS_DYNAMIC_LINKER. 
# Unlike libtool.m4, here we don't care about _all_ names of the library, but # only about the one the linker finds when passed -lNAME. This is the last # element of library_names_spec in libtool.m4, or possibly two of them if the # linker has special search rules. library_names_spec= # the last element of library_names_spec in libtool.m4 libname_spec='lib$name' case "$host_os" in aix3*) library_names_spec='$libname.a' ;; aix[4-9]*) library_names_spec='$libname$shrext' ;; amigaos*) case "$host_cpu" in powerpc*) library_names_spec='$libname$shrext' ;; m68k) library_names_spec='$libname.a' ;; esac ;; beos*) library_names_spec='$libname$shrext' ;; bsdi[45]*) library_names_spec='$libname$shrext' ;; cygwin* | mingw* | pw32* | cegcc*) shrext=.dll library_names_spec='$libname.dll.a $libname.lib' ;; darwin* | rhapsody*) shrext=.dylib library_names_spec='$libname$shrext' ;; dgux*) library_names_spec='$libname$shrext' ;; freebsd[23].*) library_names_spec='$libname$shrext$versuffix' ;; freebsd* | dragonfly*) library_names_spec='$libname$shrext' ;; gnu*) library_names_spec='$libname$shrext' ;; haiku*) library_names_spec='$libname$shrext' ;; hpux9* | hpux10* | hpux11*) case $host_cpu in ia64*) shrext=.so ;; hppa*64*) shrext=.sl ;; *) shrext=.sl ;; esac library_names_spec='$libname$shrext' ;; interix[3-9]*) library_names_spec='$libname$shrext' ;; irix5* | irix6* | nonstopux*) library_names_spec='$libname$shrext' case "$host_os" in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= ;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 ;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 ;; *) libsuff= shlibsuff= ;; esac ;; esac ;; linux*oldld* | linux*aout* | linux*coff*) ;; linux* | k*bsd*-gnu | kopensolaris*-gnu) library_names_spec='$libname$shrext' ;; knetbsd*-gnu) library_names_spec='$libname$shrext' ;; netbsd*) library_names_spec='$libname$shrext' ;; newsos6) 
library_names_spec='$libname$shrext' ;; *nto* | *qnx*) library_names_spec='$libname$shrext' ;; openbsd*) library_names_spec='$libname$shrext$versuffix' ;; os2*) libname_spec='$name' shrext=.dll library_names_spec='$libname.a' ;; osf3* | osf4* | osf5*) library_names_spec='$libname$shrext' ;; rdos*) ;; solaris*) library_names_spec='$libname$shrext' ;; sunos4*) library_names_spec='$libname$shrext$versuffix' ;; sysv4 | sysv4.3*) library_names_spec='$libname$shrext' ;; sysv4*MP*) library_names_spec='$libname$shrext' ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) library_names_spec='$libname$shrext' ;; tpf*) library_names_spec='$libname$shrext' ;; uts4*) library_names_spec='$libname$shrext' ;; esac sed_quote_subst='s/\(["`$\\]\)/\\\1/g' escaped_wl=`echo "X$wl" | sed -e 's/^X//' -e "$sed_quote_subst"` shlibext=`echo "$shrext" | sed -e 's,^\.,,'` escaped_libname_spec=`echo "X$libname_spec" | sed -e 's/^X//' -e "$sed_quote_subst"` escaped_library_names_spec=`echo "X$library_names_spec" | sed -e 's/^X//' -e "$sed_quote_subst"` escaped_hardcode_libdir_flag_spec=`echo "X$hardcode_libdir_flag_spec" | sed -e 's/^X//' -e "$sed_quote_subst"` LC_ALL=C sed -e 's/^\([a-zA-Z0-9_]*\)=/acl_cv_\1=/' <&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; 
\ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = . ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/gettext.m4 \ $(top_srcdir)/m4/host-cpu-c-abi.m4 $(top_srcdir)/m4/iconv.m4 \ $(top_srcdir)/m4/intlmacosx.m4 $(top_srcdir)/m4/lib-ld.m4 \ $(top_srcdir)/m4/lib-link.m4 $(top_srcdir)/m4/lib-prefix.m4 \ $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/m4/nls.m4 \ $(top_srcdir)/m4/po.m4 $(top_srcdir)/m4/progtest.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(top_srcdir)/configure \ $(am__configure_deps) $(dist_noinst_DATA) $(am__DIST_COMMON) am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \ configure.lineno config.status.lineno mkinstalldirs = $(install_sh) -d CONFIG_HEADER = config.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) 
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac DATA = $(dist_noinst_DATA) RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ cscope distdir distdir-am dist dist-all distcheck am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) \ config.h.in # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. 
am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` DIST_SUBDIRS = $(SUBDIRS) am__DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/config.h.in \ ABOUT-NLS AUTHORS COPYING ChangeLog INSTALL NEWS README THANKS \ TODO compile config.guess config.rpath config.sub install-sh \ ltmain.sh missing DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) distdir = $(PACKAGE)-$(VERSION) top_distdir = $(distdir) am__remove_distdir = \ if test -d "$(distdir)"; then \ find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \ && rm -rf "$(distdir)" \ || { sleep 5 && rm -rf "$(distdir)"; }; \ else :; fi am__post_remove_distdir = $(am__remove_distdir) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" DIST_ARCHIVES = $(distdir).tar.gz GZIP_ENV = --best DIST_TARGETS = dist-gzip # Exists only to be overridden by the user if desired. AM_DISTCHECK_DVI_TARGET = dvi distuninstallcheck_listfiles = find . -type f -print am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \ | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$' distcleancheck_listfiles = find . 
-type f -print ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = @CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CXXSTDFLAGS = @CXXSTDFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DOXYGEN_PROG = @DOXYGEN_PROG@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GETTEXT_MACRO_VERSION = @GETTEXT_MACRO_VERSION@ GMSGFMT = @GMSGFMT@ GMSGFMT_015 = @GMSGFMT_015@ GPGME_CFLAGS = @GPGME_CFLAGS@ GPGME_CONFIG = @GPGME_CONFIG@ GPGME_LIBS = @GPGME_LIBS@ GPGRT_CONFIG = @GPGRT_CONFIG@ GREP = @GREP@ HAS_DOT = @HAS_DOT@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ INTLLIBS = @INTLLIBS@ INTL_MACOSX_LIBS = @INTL_MACOSX_LIBS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL_CFLAGS = @LIBCURL_CFLAGS@ LIBCURL_LIBS = @LIBCURL_LIBS@ LIBICONV = @LIBICONV@ LIBINTL = @LIBINTL@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTHREADAR_CFLAGS = @LIBTHREADAR_CFLAGS@ LIBTHREADAR_LIBS = @LIBTHREADAR_LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBICONV = @LTLIBICONV@ LTLIBINTL = @LTLIBINTL@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MSGFMT = @MSGFMT@ MSGMERGE = @MSGMERGE@ MSGMERGE_FOR_MSGFMT_OPTION = @MSGMERGE_FOR_MSGFMT_OPTION@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = 
@PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ POSUB = @POSUB@ PYEXT = @PYEXT@ PYFLAGS = @PYFLAGS@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ UPX_PROG = @UPX_PROG@ USE_NLS = @USE_NLS@ VERSION = @VERSION@ XGETTEXT = @XGETTEXT@ XGETTEXT_015 = @XGETTEXT_015@ XGETTEXT_EXTRA_OPTIONS = @XGETTEXT_EXTRA_OPTIONS@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dot = @dot@ doxygen = @doxygen@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ groff = @groff@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ pkgconfigdir = @pkgconfigdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ tmp = @tmp@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ upx = @upx@ SUBDIRS = man src doc 
misc po dist_noinst_DATA = INSTALL README THANKS TODO AUTHORS COPYING ChangeLog NEWS ABOUT-NLS CPPCHECKDIR = ./cppcheckbuilddir ACLOCAL_AMFLAGS = -I m4 all: config.h $(MAKE) $(AM_MAKEFLAGS) all-recursive .SUFFIXES: am--refresh: Makefile @: $(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ echo ' cd $(srcdir) && $(AUTOMAKE) --gnu'; \ $(am__cd) $(srcdir) && $(AUTOMAKE) --gnu \ && exit 0; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ echo ' $(SHELL) ./config.status'; \ $(SHELL) ./config.status;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) $(SHELL) ./config.status --recheck $(top_srcdir)/configure: $(am__configure_deps) $(am__cd) $(srcdir) && $(AUTOCONF) $(ACLOCAL_M4): $(am__aclocal_m4_deps) $(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS) $(am__aclocal_m4_deps): config.h: stamp-h1 @test -f $@ || rm -f stamp-h1 @test -f $@ || $(MAKE) $(AM_MAKEFLAGS) stamp-h1 stamp-h1: $(srcdir)/config.h.in $(top_builddir)/config.status @rm -f stamp-h1 cd $(top_builddir) && $(SHELL) ./config.status config.h $(srcdir)/config.h.in: $(am__configure_deps) ($(am__cd) $(top_srcdir) && $(AUTOHEADER)) rm -f stamp-h1 touch $@ distclean-hdr: -rm -f config.h stamp-h1 mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs distclean-libtool: -rm -f libtool config.lt # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. 
# To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. $(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscope: cscope.files test ! -s cscope.files \ || $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS) clean-cscope: -rm -f cscope.files cscope.files: clean-cscope cscopelist cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags -rm -f cscope.out cscope.in.out cscope.po.out cscope.files distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) $(am__remove_distdir) test -d "$(distdir)" || mkdir "$(distdir)" @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ 
for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done -test -n "$(am__skip_mode_fix)" \ || find "$(distdir)" -type d ! -perm -755 \ -exec chmod u+rwx,go+rx {} \; -o \ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \ ! -type d ! 
-perm -444 -exec $(install_sh) -c -m a+r {} {} \; \ || chmod -R a+r "$(distdir)" dist-gzip: distdir tardir=$(distdir) && $(am__tar) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).tar.gz $(am__post_remove_distdir) dist-bzip2: distdir tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2 $(am__post_remove_distdir) dist-lzip: distdir tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz $(am__post_remove_distdir) dist-xz: distdir tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz $(am__post_remove_distdir) dist-zstd: distdir tardir=$(distdir) && $(am__tar) | zstd -c $${ZSTD_CLEVEL-$${ZSTD_OPT--19}} >$(distdir).tar.zst $(am__post_remove_distdir) dist-tarZ: distdir @echo WARNING: "Support for distribution archives compressed with" \ "legacy program 'compress' is deprecated." >&2 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2 tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z $(am__post_remove_distdir) dist-shar: distdir @echo WARNING: "Support for shar distribution archives is" \ "deprecated." >&2 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2 shar $(distdir) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).shar.gz $(am__post_remove_distdir) dist-zip: distdir -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__post_remove_distdir) dist dist-all: $(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:' $(am__post_remove_distdir) # This target untars the dist file and tries a VPATH configuration. Then # it guarantees that the distribution is self-contained by making another # tarfile. 
distcheck: dist case '$(DIST_ARCHIVES)' in \ *.tar.gz*) \ eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).tar.gz | $(am__untar) ;;\ *.tar.bz2*) \ bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\ *.tar.lz*) \ lzip -dc $(distdir).tar.lz | $(am__untar) ;;\ *.tar.xz*) \ xz -dc $(distdir).tar.xz | $(am__untar) ;;\ *.tar.Z*) \ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\ *.shar.gz*) \ eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).shar.gz | unshar ;;\ *.zip*) \ unzip $(distdir).zip ;;\ *.tar.zst*) \ zstd -dc $(distdir).tar.zst | $(am__untar) ;;\ esac chmod -R a-w $(distdir) chmod u+w $(distdir) mkdir $(distdir)/_build $(distdir)/_build/sub $(distdir)/_inst chmod a-w $(distdir) test -d $(distdir)/_build || exit 0; \ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \ && am__cwd=`pwd` \ && $(am__cd) $(distdir)/_build/sub \ && ../../configure \ $(AM_DISTCHECK_CONFIGURE_FLAGS) \ $(DISTCHECK_CONFIGURE_FLAGS) \ --srcdir=../.. --prefix="$$dc_install_base" \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) $(AM_DISTCHECK_DVI_TARGET) \ && $(MAKE) $(AM_MAKEFLAGS) check \ && $(MAKE) $(AM_MAKEFLAGS) install \ && $(MAKE) $(AM_MAKEFLAGS) installcheck \ && $(MAKE) $(AM_MAKEFLAGS) uninstall \ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \ distuninstallcheck \ && chmod -R a-w "$$dc_install_base" \ && ({ \ (cd ../.. 
&& umask 077 && mkdir "$$dc_destdir") \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \ } || { rm -rf "$$dc_destdir"; exit 1; }) \ && rm -rf "$$dc_destdir" \ && $(MAKE) $(AM_MAKEFLAGS) dist \ && rm -rf $(DIST_ARCHIVES) \ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \ && cd "$$am__cwd" \ || exit 1 $(am__post_remove_distdir) @(echo "$(distdir) archives ready for distribution: "; \ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x' distuninstallcheck: @test -n '$(distuninstallcheck_dir)' || { \ echo 'ERROR: trying to run $@ with an empty' \ '$$(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ $(am__cd) '$(distuninstallcheck_dir)' || { \ echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left after uninstall:" ; \ if test -n "$(DESTDIR)"; then \ echo " (check DESTDIR support)"; \ fi ; \ $(distuninstallcheck_listfiles) ; \ exit 1; } >&2 distcleancheck: distclean @if test '$(srcdir)' = . 
; then \ echo "ERROR: distcleancheck can only run from a VPATH build" ; \ exit 1 ; \ fi @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left in build directory after distclean:" ; \ $(distcleancheck_listfiles) ; \ exit 1; } >&2 check-am: all-am check: check-recursive all-am: Makefile $(DATA) config.h installdirs: installdirs-recursive installdirs-am: install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-recursive clean-am: clean-generic clean-libtool clean-local mostlyclean-am distclean: distclean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -f Makefile distclean-am: clean-am distclean-generic distclean-hdr \ distclean-libtool distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf $(top_srcdir)/autom4te.cache -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: .MAKE: $(am__recursive_targets) all install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am \ am--refresh check check-am clean clean-cscope clean-generic \ clean-libtool clean-local cscope cscopelist-am ctags ctags-am \ dist dist-all dist-bzip2 dist-gzip dist-lzip dist-shar \ dist-tarZ dist-xz dist-zip dist-zstd distcheck distclean \ distclean-generic distclean-hdr distclean-libtool \ distclean-tags distcleancheck distdir distuninstallcheck dvi \ dvi-am html html-am info info-am install install-am \ install-data install-data-am install-dvi install-dvi-am \ install-exec install-exec-am install-html install-html-am \ install-info install-info-am install-man install-pdf \ install-pdf-am install-ps install-ps-am install-strip \ installcheck installcheck-am installdirs installdirs-am \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags tags-am uninstall uninstall-am 
.PRECIOUS: Makefile cppcheck: @if cppcheck --help > /dev/null ; then \ ( mkdir -p $(CPPCHECKDIR) && cppcheck --force --file-filter="*.?pp" -isrc/testing -isrc/python --cppcheck-build-dir=$(CPPCHECKDIR) `pwd` ) || exit 1 ; \ else \ echo "cppcheck not present, aborting" || exit 1 ; \ fi clean-local: rm -rf $(CPPCHECKDIR) # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: dar-2.7.17/config.guess0000755000175000017520000014051214175772605011642 00000000000000#! /bin/sh # Attempt to guess a canonical system name. # Copyright 1992-2022 Free Software Foundation, Inc. # shellcheck disable=SC2006,SC2268 # see below for rationale timestamp='2022-01-09' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # # Originally written by Per Bothner; maintained since 2000 by Ben Elliston. # # You can get the latest version of this script from: # https://git.savannah.gnu.org/cgit/config.git/plain/config.guess # # Please send patches to . 
# The "shellcheck disable" line above the timestamp inhibits complaints # about features and limitations of the classic Bourne shell that were # superseded or lifted in POSIX. However, this script identifies a wide # variety of pre-POSIX systems that do not have POSIX shells at all, and # even some reasonably current systems (Solaris 10 as case-in-point) still # have a pre-POSIX /bin/sh. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] Output the configuration name of the system \`$me' is run on. Options: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.guess ($timestamp) Originally written by Per Bothner. Copyright 1992-2022 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; * ) break ;; esac done if test $# != 0; then echo "$me: too many arguments$help" >&2 exit 1 fi # Just in case it came from the environment. GUESS= # CC_FOR_BUILD -- compiler used by this script. Note that the use of a # compiler to aid in system detection is discouraged as it requires # temporary files to be created and, as you can see below, it is a # headache to deal with in a portable fashion. # Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still # use `HOST_CC' if defined, but it is deprecated. # Portable tmp directory creation inspired by the Autoconf team. 
tmp= # shellcheck disable=SC2172 trap 'test -z "$tmp" || rm -fr "$tmp"' 0 1 2 13 15 set_cc_for_build() { # prevent multiple calls if $tmp is already set test "$tmp" && return 0 : "${TMPDIR=/tmp}" # shellcheck disable=SC2039,SC3028 { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir "$tmp" 2>/dev/null) ; } || { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir "$tmp" 2>/dev/null) && echo "Warning: creating insecure temp directory" >&2 ; } || { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } dummy=$tmp/dummy case ${CC_FOR_BUILD-},${HOST_CC-},${CC-} in ,,) echo "int x;" > "$dummy.c" for driver in cc gcc c89 c99 ; do if ($driver -c -o "$dummy.o" "$dummy.c") >/dev/null 2>&1 ; then CC_FOR_BUILD=$driver break fi done if test x"$CC_FOR_BUILD" = x ; then CC_FOR_BUILD=no_compiler_found fi ;; ,,*) CC_FOR_BUILD=$CC ;; ,*,*) CC_FOR_BUILD=$HOST_CC ;; esac } # This is needed to find uname on a Pyramid OSx when run in the BSD universe. # (ghazi@noc.rutgers.edu 1994-08-24) if test -f /.attbin/uname ; then PATH=$PATH:/.attbin ; export PATH fi UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown case $UNAME_SYSTEM in Linux|GNU|GNU/*) LIBC=unknown set_cc_for_build cat <<-EOF > "$dummy.c" #include #if defined(__UCLIBC__) LIBC=uclibc #elif defined(__dietlibc__) LIBC=dietlibc #elif defined(__GLIBC__) LIBC=gnu #else #include /* First heuristic to detect musl libc. */ #ifdef __DEFINED_va_list LIBC=musl #endif #endif EOF cc_set_libc=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^LIBC' | sed 's, ,,g'` eval "$cc_set_libc" # Second heuristic to detect musl libc. 
if [ "$LIBC" = unknown ] && command -v ldd >/dev/null && ldd --version 2>&1 | grep -q ^musl; then LIBC=musl fi # If the system lacks a compiler, then just pick glibc. # We could probably try harder. if [ "$LIBC" = unknown ]; then LIBC=gnu fi ;; esac # Note: order is significant - the case branches are not exclusive. case $UNAME_MACHINE:$UNAME_SYSTEM:$UNAME_RELEASE:$UNAME_VERSION in *:NetBSD:*:*) # NetBSD (nbsd) targets should (where applicable) match one or # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*, # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently # switched to ELF, *-*-netbsd* would select the old # object file format. This provides both forward # compatibility and a consistent mechanism for selecting the # object file format. # # Note: NetBSD doesn't particularly care about the vendor # portion of the name. We always set it to "unknown". UNAME_MACHINE_ARCH=`(uname -p 2>/dev/null || \ /sbin/sysctl -n hw.machine_arch 2>/dev/null || \ /usr/sbin/sysctl -n hw.machine_arch 2>/dev/null || \ echo unknown)` case $UNAME_MACHINE_ARCH in aarch64eb) machine=aarch64_be-unknown ;; armeb) machine=armeb-unknown ;; arm*) machine=arm-unknown ;; sh3el) machine=shl-unknown ;; sh3eb) machine=sh-unknown ;; sh5el) machine=sh5le-unknown ;; earmv*) arch=`echo "$UNAME_MACHINE_ARCH" | sed -e 's,^e\(armv[0-9]\).*$,\1,'` endian=`echo "$UNAME_MACHINE_ARCH" | sed -ne 's,^.*\(eb\)$,\1,p'` machine=${arch}${endian}-unknown ;; *) machine=$UNAME_MACHINE_ARCH-unknown ;; esac # The Operating System including object format, if it has switched # to ELF recently (or will in the future) and ABI. case $UNAME_MACHINE_ARCH in earm*) os=netbsdelf ;; arm*|i386|m68k|ns32k|sh3*|sparc|vax) set_cc_for_build if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ELF__ then # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout). # Return netbsd for either. FIX? os=netbsd else os=netbsdelf fi ;; *) os=netbsd ;; esac # Determine ABI tags. 
case $UNAME_MACHINE_ARCH in earm*) expr='s/^earmv[0-9]/-eabi/;s/eb$//' abi=`echo "$UNAME_MACHINE_ARCH" | sed -e "$expr"` ;; esac # The OS release # Debian GNU/NetBSD machines have a different userland, and # thus, need a distinct triplet. However, they do not need # kernel version information, so it can be replaced with a # suitable tag, in the style of linux-gnu. case $UNAME_VERSION in Debian*) release='-gnu' ;; *) release=`echo "$UNAME_RELEASE" | sed -e 's/[-_].*//' | cut -d. -f1,2` ;; esac # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM: # contains redundant information, the shorter form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. GUESS=$machine-${os}${release}${abi-} ;; *:Bitrig:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-bitrig$UNAME_RELEASE ;; *:OpenBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-openbsd$UNAME_RELEASE ;; *:SecBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/SecBSD.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-secbsd$UNAME_RELEASE ;; *:LibertyBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/^.*BSD\.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-libertybsd$UNAME_RELEASE ;; *:MidnightBSD:*:*) GUESS=$UNAME_MACHINE-unknown-midnightbsd$UNAME_RELEASE ;; *:ekkoBSD:*:*) GUESS=$UNAME_MACHINE-unknown-ekkobsd$UNAME_RELEASE ;; *:SolidBSD:*:*) GUESS=$UNAME_MACHINE-unknown-solidbsd$UNAME_RELEASE ;; *:OS108:*:*) GUESS=$UNAME_MACHINE-unknown-os108_$UNAME_RELEASE ;; macppc:MirBSD:*:*) GUESS=powerpc-unknown-mirbsd$UNAME_RELEASE ;; *:MirBSD:*:*) GUESS=$UNAME_MACHINE-unknown-mirbsd$UNAME_RELEASE ;; *:Sortix:*:*) GUESS=$UNAME_MACHINE-unknown-sortix ;; *:Twizzler:*:*) GUESS=$UNAME_MACHINE-unknown-twizzler ;; *:Redox:*:*) GUESS=$UNAME_MACHINE-unknown-redox ;; mips:OSF1:*.*) GUESS=mips-dec-osf1 ;; alpha:OSF1:*:*) # Reset EXIT trap before exiting to avoid spurious non-zero exit code. 
trap '' 0 case $UNAME_RELEASE in *4.0) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'` ;; *5.*) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'` ;; esac # According to Compaq, /usr/sbin/psrinfo has been available on # OSF/1 and Tru64 systems produced since 1995. I hope that # covers most systems running today. This code pipes the CPU # types through head -n 1, so we only detect the type of CPU 0. ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1` case $ALPHA_CPU_TYPE in "EV4 (21064)") UNAME_MACHINE=alpha ;; "EV4.5 (21064)") UNAME_MACHINE=alpha ;; "LCA4 (21066/21068)") UNAME_MACHINE=alpha ;; "EV5 (21164)") UNAME_MACHINE=alphaev5 ;; "EV5.6 (21164A)") UNAME_MACHINE=alphaev56 ;; "EV5.6 (21164PC)") UNAME_MACHINE=alphapca56 ;; "EV5.7 (21164PC)") UNAME_MACHINE=alphapca57 ;; "EV6 (21264)") UNAME_MACHINE=alphaev6 ;; "EV6.7 (21264A)") UNAME_MACHINE=alphaev67 ;; "EV6.8CB (21264C)") UNAME_MACHINE=alphaev68 ;; "EV6.8AL (21264B)") UNAME_MACHINE=alphaev68 ;; "EV6.8CX (21264D)") UNAME_MACHINE=alphaev68 ;; "EV6.9A (21264/EV69A)") UNAME_MACHINE=alphaev69 ;; "EV7 (21364)") UNAME_MACHINE=alphaev7 ;; "EV7.9 (21364A)") UNAME_MACHINE=alphaev79 ;; esac # A Pn.n version is a patched version. # A Vn.n version is a released version. # A Tn.n version is a released field test version. # A Xn.n version is an unreleased experimental baselevel. # 1.2 uses "1.2" for uname -r. 
OSF_REL=`echo "$UNAME_RELEASE" | sed -e 's/^[PVTX]//' | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz` GUESS=$UNAME_MACHINE-dec-osf$OSF_REL ;; Amiga*:UNIX_System_V:4.0:*) GUESS=m68k-unknown-sysv4 ;; *:[Aa]miga[Oo][Ss]:*:*) GUESS=$UNAME_MACHINE-unknown-amigaos ;; *:[Mm]orph[Oo][Ss]:*:*) GUESS=$UNAME_MACHINE-unknown-morphos ;; *:OS/390:*:*) GUESS=i370-ibm-openedition ;; *:z/VM:*:*) GUESS=s390-ibm-zvmoe ;; *:OS400:*:*) GUESS=powerpc-ibm-os400 ;; arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) GUESS=arm-acorn-riscix$UNAME_RELEASE ;; arm*:riscos:*:*|arm*:RISCOS:*:*) GUESS=arm-unknown-riscos ;; SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) GUESS=hppa1.1-hitachi-hiuxmpp ;; Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*) # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE. case `(/bin/universe) 2>/dev/null` in att) GUESS=pyramid-pyramid-sysv3 ;; *) GUESS=pyramid-pyramid-bsd ;; esac ;; NILE*:*:*:dcosx) GUESS=pyramid-pyramid-svr4 ;; DRS?6000:unix:4.0:6*) GUESS=sparc-icl-nx6 ;; DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*) case `/usr/bin/uname -p` in sparc) GUESS=sparc-icl-nx7 ;; esac ;; s390x:SunOS:*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=$UNAME_MACHINE-ibm-solaris2$SUN_REL ;; sun4H:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-hal-solaris2$SUN_REL ;; sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-sun-solaris2$SUN_REL ;; i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*) GUESS=i386-pc-auroraux$UNAME_RELEASE ;; i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*) set_cc_for_build SUN_ARCH=i386 # If there is a compiler, see if it is configured for 64-bit objects. # Note that the Sun cc does not turn __LP64__ into 1 like gcc does. # This test works for both compilers. 
if test "$CC_FOR_BUILD" != no_compiler_found; then if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -m64 -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then SUN_ARCH=x86_64 fi fi SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=$SUN_ARCH-pc-solaris2$SUN_REL ;; sun4*:SunOS:6*:*) # According to config.sub, this is the proper way to canonicalize # SunOS6. Hard to guess exactly what SunOS6 will be like, but # it's likely to be more like Solaris than SunOS4. SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-sun-solaris3$SUN_REL ;; sun4*:SunOS:*:*) case `/usr/bin/arch -k` in Series*|S4*) UNAME_RELEASE=`uname -v` ;; esac # Japanese Language versions have a version number like `4.1.3-JL'. SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/'` GUESS=sparc-sun-sunos$SUN_REL ;; sun3*:SunOS:*:*) GUESS=m68k-sun-sunos$UNAME_RELEASE ;; sun*:*:4.2BSD:*) UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null` test "x$UNAME_RELEASE" = x && UNAME_RELEASE=3 case `/bin/arch` in sun3) GUESS=m68k-sun-sunos$UNAME_RELEASE ;; sun4) GUESS=sparc-sun-sunos$UNAME_RELEASE ;; esac ;; aushp:SunOS:*:*) GUESS=sparc-auspex-sunos$UNAME_RELEASE ;; # The situation for MiNT is a little confusing. The machine name # can be virtually everything (everything which is not # "atarist" or "atariste" at least should have a processor # > m68000). The system name ranges from "MiNT" over "FreeMiNT" # to the lowercase version "mint" (or "freemint"). Finally # the system name "TOS" denotes a system which is actually not # MiNT. But MiNT is downward compatible to TOS, so this should # be no problem. 
atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*) GUESS=m68k-milan-mint$UNAME_RELEASE ;; hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*) GUESS=m68k-hades-mint$UNAME_RELEASE ;; *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*) GUESS=m68k-unknown-mint$UNAME_RELEASE ;; m68k:machten:*:*) GUESS=m68k-apple-machten$UNAME_RELEASE ;; powerpc:machten:*:*) GUESS=powerpc-apple-machten$UNAME_RELEASE ;; RISC*:Mach:*:*) GUESS=mips-dec-mach_bsd4.3 ;; RISC*:ULTRIX:*:*) GUESS=mips-dec-ultrix$UNAME_RELEASE ;; VAX*:ULTRIX*:*:*) GUESS=vax-dec-ultrix$UNAME_RELEASE ;; 2020:CLIX:*:* | 2430:CLIX:*:*) GUESS=clipper-intergraph-clix$UNAME_RELEASE ;; mips:*:*:UMIPS | mips:*:*:RISCos) set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #ifdef __cplusplus #include /* for printf() prototype */ int main (int argc, char *argv[]) { #else int main (argc, argv) int argc; char *argv[]; { #endif #if defined (host_mips) && defined (MIPSEB) #if defined (SYSTYPE_SYSV) printf ("mips-mips-riscos%ssysv\\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_SVR4) printf ("mips-mips-riscos%ssvr4\\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD) printf ("mips-mips-riscos%sbsd\\n", argv[1]); exit (0); #endif #endif exit (-1); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" && dummyarg=`echo "$UNAME_RELEASE" | sed -n 's/\([0-9]*\).*/\1/p'` && SYSTEM_NAME=`"$dummy" "$dummyarg"` && { echo "$SYSTEM_NAME"; exit; } GUESS=mips-mips-riscos$UNAME_RELEASE ;; Motorola:PowerMAX_OS:*:*) GUESS=powerpc-motorola-powermax ;; Motorola:*:4.3:PL8-*) GUESS=powerpc-harris-powermax ;; Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*) GUESS=powerpc-harris-powermax ;; Night_Hawk:Power_UNIX:*:*) 
GUESS=powerpc-harris-powerunix ;; m88k:CX/UX:7*:*) GUESS=m88k-harris-cxux7 ;; m88k:*:4*:R4*) GUESS=m88k-motorola-sysv4 ;; m88k:*:3*:R3*) GUESS=m88k-motorola-sysv3 ;; AViiON:dgux:*:*) # DG/UX returns AViiON for all architectures UNAME_PROCESSOR=`/usr/bin/uname -p` if test "$UNAME_PROCESSOR" = mc88100 || test "$UNAME_PROCESSOR" = mc88110 then if test "$TARGET_BINARY_INTERFACE"x = m88kdguxelfx || \ test "$TARGET_BINARY_INTERFACE"x = x then GUESS=m88k-dg-dgux$UNAME_RELEASE else GUESS=m88k-dg-dguxbcs$UNAME_RELEASE fi else GUESS=i586-dg-dgux$UNAME_RELEASE fi ;; M88*:DolphinOS:*:*) # DolphinOS (SVR3) GUESS=m88k-dolphin-sysv3 ;; M88*:*:R3*:*) # Delta 88k system running SVR3 GUESS=m88k-motorola-sysv3 ;; XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) GUESS=m88k-tektronix-sysv3 ;; Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) GUESS=m68k-tektronix-bsd ;; *:IRIX*:*:*) IRIX_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/g'` GUESS=mips-sgi-irix$IRIX_REL ;; ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. 
GUESS=romp-ibm-aix # uname -m gives an 8 hex-code CPU id ;; # Note that: echo "'`uname -s`'" gives 'AIX ' i*86:AIX:*:*) GUESS=i386-ibm-aix ;; ia64:AIX:*:*) if test -x /usr/bin/oslevel ; then IBM_REV=`/usr/bin/oslevel` else IBM_REV=$UNAME_VERSION.$UNAME_RELEASE fi GUESS=$UNAME_MACHINE-ibm-aix$IBM_REV ;; *:AIX:2:3) if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #include <sys/systemcfg.h> main() { if (!__power_pc()) exit(1); puts("powerpc-ibm-aix3.2.5"); exit(0); } EOF if $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` then GUESS=$SYSTEM_NAME else GUESS=rs6000-ibm-aix3.2.5 fi elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then GUESS=rs6000-ibm-aix3.2.4 else GUESS=rs6000-ibm-aix3.2 fi ;; *:AIX:*:[4567]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El "$IBM_CPU_ID" | grep ' POWER' >/dev/null 2>&1; then IBM_ARCH=rs6000 else IBM_ARCH=powerpc fi if test -x /usr/bin/lslpp ; then IBM_REV=`/usr/bin/lslpp -Lqc bos.rte.libc | \ awk -F: '{ print $3 }' | sed s/[0-9]*$/0/` else IBM_REV=$UNAME_VERSION.$UNAME_RELEASE fi GUESS=$IBM_ARCH-ibm-aix$IBM_REV ;; *:AIX:*:*) GUESS=rs6000-ibm-aix ;; ibmrt:4.4BSD:*|romp-ibm:4.4BSD:*) GUESS=romp-ibm-bsd4.4 ;; ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and GUESS=romp-ibm-bsd$UNAME_RELEASE # 4.3 with uname added to ;; # report: romp-ibm BSD 4.3 *:BOSX:*:*) GUESS=rs6000-bull-bosx ;; DPX/2?00:B.O.S.:*:*) GUESS=m68k-bull-sysv3 ;; 9000/[34]??:4.3bsd:1.*:*) GUESS=m68k-hp-bsd ;; hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*) GUESS=m68k-hp-bsd4.4 ;; 9000/[34678]??:HP-UX:*:*) HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'` case $UNAME_MACHINE in 9000/31?) HP_ARCH=m68000 ;; 9000/[34]??)
HP_ARCH=m68k ;; 9000/[678][0-9][0-9]) if test -x /usr/bin/getconf; then sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null` sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null` case $sc_cpu_version in 523) HP_ARCH=hppa1.0 ;; # CPU_PA_RISC1_0 528) HP_ARCH=hppa1.1 ;; # CPU_PA_RISC1_1 532) # CPU_PA_RISC2_0 case $sc_kernel_bits in 32) HP_ARCH=hppa2.0n ;; 64) HP_ARCH=hppa2.0w ;; '') HP_ARCH=hppa2.0 ;; # HP-UX 10.20 esac ;; esac fi if test "$HP_ARCH" = ""; then set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #define _HPUX_SOURCE #include <stdlib.h> #include <unistd.h> int main () { #if defined(_SC_KERNEL_BITS) long bits = sysconf(_SC_KERNEL_BITS); #endif long cpu = sysconf (_SC_CPU_VERSION); switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0"); break; case CPU_PA_RISC1_1: puts ("hppa1.1"); break; case CPU_PA_RISC2_0: #if defined(_SC_KERNEL_BITS) switch (bits) { case 64: puts ("hppa2.0w"); break; case 32: puts ("hppa2.0n"); break; default: puts ("hppa2.0"); break; } break; #else /* !defined(_SC_KERNEL_BITS) */ puts ("hppa2.0"); break; #endif default: puts ("hppa1.0"); break; } exit (0); } EOF (CCOPTS="" $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null) && HP_ARCH=`"$dummy"` test -z "$HP_ARCH" && HP_ARCH=hppa fi ;; esac if test "$HP_ARCH" = hppa2.0w then set_cc_for_build # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler # generating 64-bit code.
GNU and HP use different nomenclature: # # $ CC_FOR_BUILD=cc ./config.guess # => hppa2.0w-hp-hpux11.23 # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess # => hppa64-hp-hpux11.23 if echo __LP64__ | (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | grep -q __LP64__ then HP_ARCH=hppa2.0w else HP_ARCH=hppa64 fi fi GUESS=$HP_ARCH-hp-hpux$HPUX_REV ;; ia64:HP-UX:*:*) HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'` GUESS=ia64-hp-hpux$HPUX_REV ;; 3050*:HI-UX:*:*) set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #include <unistd.h> int main () { long cpu = sysconf (_SC_CPU_VERSION); /* The order matters, because CPU_IS_HP_MC68K erroneously returns true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct results, however. */ if (CPU_IS_PA_RISC (cpu)) { switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break; case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break; case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break; default: puts ("hppa-hitachi-hiuxwe2"); break; } } else if (CPU_IS_HP_MC68K (cpu)) puts ("m68k-hitachi-hiuxwe2"); else puts ("unknown-hitachi-hiuxwe2"); exit (0); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` && { echo "$SYSTEM_NAME"; exit; } GUESS=unknown-hitachi-hiuxwe2 ;; 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:*) GUESS=hppa1.1-hp-bsd ;; 9000/8??:4.3bsd:*:*) GUESS=hppa1.0-hp-bsd ;; *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*) GUESS=hppa1.0-hp-mpeix ;; hp7??:OSF1:*:* | hp8?[79]:OSF1:*:*) GUESS=hppa1.1-hp-osf ;; hp8??:OSF1:*:*) GUESS=hppa1.0-hp-osf ;; i*86:OSF1:*:*) if test -x /usr/sbin/sysversion ; then GUESS=$UNAME_MACHINE-unknown-osf1mk else GUESS=$UNAME_MACHINE-unknown-osf1 fi ;; parisc*:Lites*:*:*) GUESS=hppa1.1-hp-lites ;; C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) GUESS=c1-convex-bsd ;; C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) GUESS=c34-convex-bsd ;; C38*:ConvexOS:*:* |
convex:ConvexOS:C38*:*) GUESS=c38-convex-bsd ;; C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) GUESS=c4-convex-bsd ;; CRAY*Y-MP:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=ymp-cray-unicos$CRAY_REL ;; CRAY*[A-Z]90:*:*:*) echo "$UNAME_MACHINE"-cray-unicos"$UNAME_RELEASE" \ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \ -e 's/\.[^.]*$/.X/' exit ;; CRAY*TS:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=t90-cray-unicos$CRAY_REL ;; CRAY*T3E:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=alphaev5-cray-unicosmk$CRAY_REL ;; CRAY*SV1:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=sv1-cray-unicos$CRAY_REL ;; *:UNICOS/mp:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=craynv-cray-unicosmp$CRAY_REL ;; F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*) FUJITSU_PROC=`uname -m | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz` FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'` FUJITSU_REL=`echo "$UNAME_RELEASE" | sed -e 's/ /_/'` GUESS=${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL} ;; 5000:UNIX_System_V:4.*:*) FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'` FUJITSU_REL=`echo "$UNAME_RELEASE" | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/ /_/'` GUESS=sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL} ;; i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*) GUESS=$UNAME_MACHINE-pc-bsdi$UNAME_RELEASE ;; sparc*:BSD/OS:*:*) GUESS=sparc-unknown-bsdi$UNAME_RELEASE ;; *:BSD/OS:*:*) GUESS=$UNAME_MACHINE-unknown-bsdi$UNAME_RELEASE ;; arm:FreeBSD:*:*) UNAME_PROCESSOR=`uname -p` set_cc_for_build if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` 
GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabi else FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabihf fi ;; *:FreeBSD:*:*) UNAME_PROCESSOR=`/usr/bin/uname -p` case $UNAME_PROCESSOR in amd64) UNAME_PROCESSOR=x86_64 ;; i386) UNAME_PROCESSOR=i586 ;; esac FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL ;; i*:CYGWIN*:*) GUESS=$UNAME_MACHINE-pc-cygwin ;; *:MINGW64*:*) GUESS=$UNAME_MACHINE-pc-mingw64 ;; *:MINGW*:*) GUESS=$UNAME_MACHINE-pc-mingw32 ;; *:MSYS*:*) GUESS=$UNAME_MACHINE-pc-msys ;; i*:PW*:*) GUESS=$UNAME_MACHINE-pc-pw32 ;; *:SerenityOS:*:*) GUESS=$UNAME_MACHINE-pc-serenity ;; *:Interix*:*) case $UNAME_MACHINE in x86) GUESS=i586-pc-interix$UNAME_RELEASE ;; authenticamd | genuineintel | EM64T) GUESS=x86_64-unknown-interix$UNAME_RELEASE ;; IA64) GUESS=ia64-unknown-interix$UNAME_RELEASE ;; esac ;; i*:UWIN*:*) GUESS=$UNAME_MACHINE-pc-uwin ;; amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*) GUESS=x86_64-pc-cygwin ;; prep*:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=powerpcle-unknown-solaris2$SUN_REL ;; *:GNU:*:*) # the GNU system GNU_ARCH=`echo "$UNAME_MACHINE" | sed -e 's,[-/].*$,,'` GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's,/.*$,,'` GUESS=$GNU_ARCH-unknown-$LIBC$GNU_REL ;; *:GNU/*:*:*) # other systems with GNU libc and userland GNU_SYS=`echo "$UNAME_SYSTEM" | sed 's,^[^/]*/,,' | tr "[:upper:]" "[:lower:]"` GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_MACHINE-unknown-$GNU_SYS$GNU_REL-$LIBC ;; *:Minix:*:*) GUESS=$UNAME_MACHINE-unknown-minix ;; aarch64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; aarch64_be:Linux:*:*) UNAME_MACHINE=aarch64_be GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; alpha:Linux:*:*) case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' /proc/cpuinfo 2>/dev/null` in EV5) UNAME_MACHINE=alphaev5 ;; EV56) UNAME_MACHINE=alphaev56 ;; PCA56) UNAME_MACHINE=alphapca56 ;; 
PCA57) UNAME_MACHINE=alphapca56 ;; EV6) UNAME_MACHINE=alphaev6 ;; EV67) UNAME_MACHINE=alphaev67 ;; EV68*) UNAME_MACHINE=alphaev68 ;; esac objdump --private-headers /bin/sh | grep -q ld.so.1 if test "$?" = 0 ; then LIBC=gnulibc1 ; fi GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; arc:Linux:*:* | arceb:Linux:*:* | arc32:Linux:*:* | arc64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; arm*:Linux:*:*) set_cc_for_build if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_EABI__ then GUESS=$UNAME_MACHINE-unknown-linux-$LIBC else if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabi else GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabihf fi fi ;; avr32*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; cris:Linux:*:*) GUESS=$UNAME_MACHINE-axis-linux-$LIBC ;; crisv32:Linux:*:*) GUESS=$UNAME_MACHINE-axis-linux-$LIBC ;; e2k:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; frv:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; hexagon:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; i*86:Linux:*:*) GUESS=$UNAME_MACHINE-pc-linux-$LIBC ;; ia64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; k1om:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; loongarch32:Linux:*:* | loongarch64:Linux:*:* | loongarchx32:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; m32r*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; m68*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; mips:Linux:*:* | mips64:Linux:*:*) set_cc_for_build IS_GLIBC=0 test x"${LIBC}" = xgnu && IS_GLIBC=1 sed 's/^ //' << EOF > "$dummy.c" #undef CPU #undef mips #undef mipsel #undef mips64 #undef mips64el #if ${IS_GLIBC} && defined(_ABI64) LIBCABI=gnuabi64 #else #if ${IS_GLIBC} && defined(_ABIN32) LIBCABI=gnuabin32 #else LIBCABI=${LIBC} #endif #endif #if ${IS_GLIBC} && defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6 CPU=mipsisa64r6 #else #if ${IS_GLIBC} && 
!defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6 CPU=mipsisa32r6 #else #if defined(__mips64) CPU=mips64 #else CPU=mips #endif #endif #endif #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL) MIPS_ENDIAN=el #else #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB) MIPS_ENDIAN= #else MIPS_ENDIAN= #endif #endif EOF cc_set_vars=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^CPU\|^MIPS_ENDIAN\|^LIBCABI'` eval "$cc_set_vars" test "x$CPU" != x && { echo "$CPU${MIPS_ENDIAN}-unknown-linux-$LIBCABI"; exit; } ;; mips64el:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; openrisc*:Linux:*:*) GUESS=or1k-unknown-linux-$LIBC ;; or32:Linux:*:* | or1k*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; padre:Linux:*:*) GUESS=sparc-unknown-linux-$LIBC ;; parisc64:Linux:*:* | hppa64:Linux:*:*) GUESS=hppa64-unknown-linux-$LIBC ;; parisc:Linux:*:* | hppa:Linux:*:*) # Look for CPU level case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in PA7*) GUESS=hppa1.1-unknown-linux-$LIBC ;; PA8*) GUESS=hppa2.0-unknown-linux-$LIBC ;; *) GUESS=hppa-unknown-linux-$LIBC ;; esac ;; ppc64:Linux:*:*) GUESS=powerpc64-unknown-linux-$LIBC ;; ppc:Linux:*:*) GUESS=powerpc-unknown-linux-$LIBC ;; ppc64le:Linux:*:*) GUESS=powerpc64le-unknown-linux-$LIBC ;; ppcle:Linux:*:*) GUESS=powerpcle-unknown-linux-$LIBC ;; riscv32:Linux:*:* | riscv32be:Linux:*:* | riscv64:Linux:*:* | riscv64be:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; s390:Linux:*:* | s390x:Linux:*:*) GUESS=$UNAME_MACHINE-ibm-linux-$LIBC ;; sh64*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; sh*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; sparc:Linux:*:* | sparc64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; tile*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; vax:Linux:*:*) GUESS=$UNAME_MACHINE-dec-linux-$LIBC ;; x86_64:Linux:*:*) set_cc_for_build LIBCABI=$LIBC if test "$CC_FOR_BUILD" != 
no_compiler_found; then if (echo '#ifdef __ILP32__'; echo IS_X32; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_X32 >/dev/null then LIBCABI=${LIBC}x32 fi fi GUESS=$UNAME_MACHINE-pc-linux-$LIBCABI ;; xtensa*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; i*86:DYNIX/ptx:4*:*) # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. # earlier versions are messed up and put the nodename in both # sysname and nodename. GUESS=i386-sequent-sysv4 ;; i*86:UNIX_SV:4.2MP:2.*) # Unixware is an offshoot of SVR4, but it has its own version # number series starting with 2... # I am not positive that other SVR4 systems won't match this, # I just have to hope. -- rms. # Use sysv4.2uw... so that sysv4* matches it. GUESS=$UNAME_MACHINE-pc-sysv4.2uw$UNAME_VERSION ;; i*86:OS/2:*:*) # If we were able to find `uname', then EMX Unix compatibility # is probably installed. GUESS=$UNAME_MACHINE-pc-os2-emx ;; i*86:XTS-300:*:STOP) GUESS=$UNAME_MACHINE-unknown-stop ;; i*86:atheos:*:*) GUESS=$UNAME_MACHINE-unknown-atheos ;; i*86:syllable:*:*) GUESS=$UNAME_MACHINE-pc-syllable ;; i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*) GUESS=i386-unknown-lynxos$UNAME_RELEASE ;; i*86:*DOS:*:*) GUESS=$UNAME_MACHINE-pc-msdosdjgpp ;; i*86:*:4.*:*) UNAME_REL=`echo "$UNAME_RELEASE" | sed 's/\/MP$//'` if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then GUESS=$UNAME_MACHINE-univel-sysv$UNAME_REL else GUESS=$UNAME_MACHINE-pc-sysv$UNAME_REL fi ;; i*86:*:5:[678]*) # UnixWare 7.x, OpenUNIX and OpenServer 6. 
case `/bin/uname -X | grep "^Machine"` in *486*) UNAME_MACHINE=i486 ;; *Pentium) UNAME_MACHINE=i586 ;; *Pent*|*Celeron) UNAME_MACHINE=i686 ;; esac GUESS=$UNAME_MACHINE-unknown-sysv${UNAME_RELEASE}.${UNAME_SYSTEM}.${UNAME_VERSION} ;; i*86:*:3.2:*) if test -f /usr/options/cb.name; then UNAME_REL=`sed -n 's/.*Version //p' </usr/options/cb.name` GUESS=$UNAME_MACHINE-pc-isc$UNAME_REL elif /bin/uname -X 2>/dev/null >/dev/null ; then UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')` (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486 (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \ && UNAME_MACHINE=i586 (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \ && UNAME_MACHINE=i686 (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \ && UNAME_MACHINE=i686 GUESS=$UNAME_MACHINE-pc-sco$UNAME_REL else GUESS=$UNAME_MACHINE-pc-sysv32 fi ;; pc:*:*:*) # Left here for compatibility: # uname -m prints for DJGPP always 'pc', but it prints nothing about # the processor, so we play safe by assuming i586. # Note: whatever this is, it MUST be the same as what config.sub # prints for the "djgpp" host, or else GDB configure will decide that # this is a cross-build. GUESS=i586-pc-msdosdjgpp ;; Intel:Mach:3*:*) GUESS=i386-pc-mach3 ;; paragon:*:*:*) GUESS=i860-intel-osf1 ;; i860:*:4.*:*) # i860-SVR4 if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then GUESS=i860-stardent-sysv$UNAME_RELEASE # Stardent Vistra i860-SVR4 else # Add other i860-SVR4 vendors below as they are discovered.
GUESS=i860-unknown-sysv$UNAME_RELEASE # Unknown i860-SVR4 fi ;; mini*:CTIX:SYS*5:*) # "miniframe" GUESS=m68010-convergent-sysv ;; mc68k:UNIX:SYSTEM5:3.51m) GUESS=m68k-convergent-sysv ;; M680?0:D-NIX:5.3:*) GUESS=m68k-diab-dnix ;; M68*:*:R3V[5678]*:*) test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;; 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0) OS_REL='' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;; 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*) /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4; exit; } ;; NCR*:*:4.2:* | MPRAS*:*:4.2:*) OS_REL='.3' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;; m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*) GUESS=m68k-unknown-lynxos$UNAME_RELEASE ;; mc68030:UNIX_System_V:4.*:*) GUESS=m68k-atari-sysv4 ;; TSUNAMI:LynxOS:2.*:*) GUESS=sparc-unknown-lynxos$UNAME_RELEASE ;; rs6000:LynxOS:2.*:*) GUESS=rs6000-unknown-lynxos$UNAME_RELEASE ;; PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*) GUESS=powerpc-unknown-lynxos$UNAME_RELEASE ;; SM[BE]S:UNIX_SV:*:*) GUESS=mips-dde-sysv$UNAME_RELEASE ;; RM*:ReliantUNIX-*:*:*) GUESS=mips-sni-sysv4 ;; RM*:SINIX-*:*:*) GUESS=mips-sni-sysv4 ;; *:SINIX-*:*:*) if uname -p 2>/dev/null >/dev/null ; then UNAME_MACHINE=`(uname -p) 
2>/dev/null` GUESS=$UNAME_MACHINE-sni-sysv4 else GUESS=ns32k-sni-sysv fi ;; PENTIUM:*:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort # says <Richard.M.Bartel@ccMail.Census.GOV> GUESS=i586-unisys-sysv4 ;; *:UNIX_System_V:4*:FTX*) # From Gerald Hewes <hewes@openmarket.com>. # How about differentiating between stratus architectures? -djm GUESS=hppa1.1-stratus-sysv4 ;; *:*:*:FTX*) # From seanf@swdc.stratus.com. GUESS=i860-stratus-sysv4 ;; i*86:VOS:*:*) # From Paul.Green@stratus.com. GUESS=$UNAME_MACHINE-stratus-vos ;; *:VOS:*:*) # From Paul.Green@stratus.com. GUESS=hppa1.1-stratus-vos ;; mc68*:A/UX:*:*) GUESS=m68k-apple-aux$UNAME_RELEASE ;; news*:NEWS-OS:6*:*) GUESS=mips-sony-newsos6 ;; R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*) if test -d /usr/nec; then GUESS=mips-nec-sysv$UNAME_RELEASE else GUESS=mips-unknown-sysv$UNAME_RELEASE fi ;; BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only. GUESS=powerpc-be-beos ;; BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only. GUESS=powerpc-apple-beos ;; BePC:BeOS:*:*) # BeOS running on Intel PC compatible. GUESS=i586-pc-beos ;; BePC:Haiku:*:*) # Haiku running on Intel PC compatible. GUESS=i586-pc-haiku ;; x86_64:Haiku:*:*) GUESS=x86_64-unknown-haiku ;; SX-4:SUPER-UX:*:*) GUESS=sx4-nec-superux$UNAME_RELEASE ;; SX-5:SUPER-UX:*:*) GUESS=sx5-nec-superux$UNAME_RELEASE ;; SX-6:SUPER-UX:*:*) GUESS=sx6-nec-superux$UNAME_RELEASE ;; SX-7:SUPER-UX:*:*) GUESS=sx7-nec-superux$UNAME_RELEASE ;; SX-8:SUPER-UX:*:*) GUESS=sx8-nec-superux$UNAME_RELEASE ;; SX-8R:SUPER-UX:*:*) GUESS=sx8r-nec-superux$UNAME_RELEASE ;; SX-ACE:SUPER-UX:*:*) GUESS=sxace-nec-superux$UNAME_RELEASE ;; Power*:Rhapsody:*:*) GUESS=powerpc-apple-rhapsody$UNAME_RELEASE ;; *:Rhapsody:*:*) GUESS=$UNAME_MACHINE-apple-rhapsody$UNAME_RELEASE ;; arm64:Darwin:*:*) GUESS=aarch64-apple-darwin$UNAME_RELEASE ;; *:Darwin:*:*) UNAME_PROCESSOR=`uname -p` case $UNAME_PROCESSOR in unknown) UNAME_PROCESSOR=powerpc ;; esac if command -v xcode-select > /dev/null 2> /dev/null && \ !
xcode-select --print-path > /dev/null 2> /dev/null ; then # Avoid executing cc if there is no toolchain installed as # cc will be a stub that puts up a graphical alert # prompting the user to install developer tools. CC_FOR_BUILD=no_compiler_found else set_cc_for_build fi if test "$CC_FOR_BUILD" != no_compiler_found; then if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then case $UNAME_PROCESSOR in i386) UNAME_PROCESSOR=x86_64 ;; powerpc) UNAME_PROCESSOR=powerpc64 ;; esac fi # On 10.4-10.6 one might compile for PowerPC via gcc -arch ppc if (echo '#ifdef __POWERPC__'; echo IS_PPC; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_PPC >/dev/null then UNAME_PROCESSOR=powerpc fi elif test "$UNAME_PROCESSOR" = i386 ; then # uname -m returns i386 or x86_64 UNAME_PROCESSOR=$UNAME_MACHINE fi GUESS=$UNAME_PROCESSOR-apple-darwin$UNAME_RELEASE ;; *:procnto*:*:* | *:QNX:[0123456789]*:*) UNAME_PROCESSOR=`uname -p` if test "$UNAME_PROCESSOR" = x86; then UNAME_PROCESSOR=i386 UNAME_MACHINE=pc fi GUESS=$UNAME_PROCESSOR-$UNAME_MACHINE-nto-qnx$UNAME_RELEASE ;; *:QNX:*:4*) GUESS=i386-pc-qnx ;; NEO-*:NONSTOP_KERNEL:*:*) GUESS=neo-tandem-nsk$UNAME_RELEASE ;; NSE-*:NONSTOP_KERNEL:*:*) GUESS=nse-tandem-nsk$UNAME_RELEASE ;; NSR-*:NONSTOP_KERNEL:*:*) GUESS=nsr-tandem-nsk$UNAME_RELEASE ;; NSV-*:NONSTOP_KERNEL:*:*) GUESS=nsv-tandem-nsk$UNAME_RELEASE ;; NSX-*:NONSTOP_KERNEL:*:*) GUESS=nsx-tandem-nsk$UNAME_RELEASE ;; *:NonStop-UX:*:*) GUESS=mips-compaq-nonstopux ;; BS2000:POSIX*:*:*) GUESS=bs2000-siemens-sysv ;; DS/*:UNIX_System_V:*:*) GUESS=$UNAME_MACHINE-$UNAME_SYSTEM-$UNAME_RELEASE ;; *:Plan9:*:*) # "uname -m" is not consistent, so use $cputype instead. 386 # is converted to i386 for consistency with other x86 # operating systems. 
if test "${cputype-}" = 386; then UNAME_MACHINE=i386 elif test "x${cputype-}" != x; then UNAME_MACHINE=$cputype fi GUESS=$UNAME_MACHINE-unknown-plan9 ;; *:TOPS-10:*:*) GUESS=pdp10-unknown-tops10 ;; *:TENEX:*:*) GUESS=pdp10-unknown-tenex ;; KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) GUESS=pdp10-dec-tops20 ;; XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) GUESS=pdp10-xkl-tops20 ;; *:TOPS-20:*:*) GUESS=pdp10-unknown-tops20 ;; *:ITS:*:*) GUESS=pdp10-unknown-its ;; SEI:*:*:SEIUX) GUESS=mips-sei-seiux$UNAME_RELEASE ;; *:DragonFly:*:*) DRAGONFLY_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_MACHINE-unknown-dragonfly$DRAGONFLY_REL ;; *:*VMS:*:*) UNAME_MACHINE=`(uname -p) 2>/dev/null` case $UNAME_MACHINE in A*) GUESS=alpha-dec-vms ;; I*) GUESS=ia64-dec-vms ;; V*) GUESS=vax-dec-vms ;; esac ;; *:XENIX:*:SysV) GUESS=i386-pc-xenix ;; i*86:skyos:*:*) SKYOS_REL=`echo "$UNAME_RELEASE" | sed -e 's/ .*$//'` GUESS=$UNAME_MACHINE-pc-skyos$SKYOS_REL ;; i*86:rdos:*:*) GUESS=$UNAME_MACHINE-pc-rdos ;; i*86:Fiwix:*:*) GUESS=$UNAME_MACHINE-pc-fiwix ;; *:AROS:*:*) GUESS=$UNAME_MACHINE-unknown-aros ;; x86_64:VMkernel:*:*) GUESS=$UNAME_MACHINE-unknown-esx ;; amd64:Isilon\ OneFS:*:*) GUESS=x86_64-unknown-onefs ;; *:Unleashed:*:*) GUESS=$UNAME_MACHINE-unknown-unleashed$UNAME_RELEASE ;; esac # Do we have a guess based on uname results? if test "x$GUESS" != x; then echo "$GUESS" exit fi # No uname command or uname output not recognized. set_cc_for_build cat > "$dummy.c" <<EOF #ifdef _SEQUENT_ #include <sys/types.h> #include <sys/utsname.h> #endif #if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__) #if defined (vax) || defined (__vax) || defined (__vax__) || defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__) #include <signal.h> #if defined(_SIZE_T_) || defined(SIGLOST) #include <sys/utsname.h> #endif #endif #endif main () { #if defined (sony) #if defined (MIPSEB) /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed, I don't know....
*/ printf ("mips-sony-bsd\n"); exit (0); #else #include <sys/param.h> printf ("m68k-sony-newsos%s\n", #ifdef NEWSOS4 "4" #else "" #endif ); exit (0); #endif #endif #if defined (NeXT) #if !defined (__ARCHITECTURE__) #define __ARCHITECTURE__ "m68k" #endif int version; version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`; if (version < 4) printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version); else printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version); exit (0); #endif #if defined (MULTIMAX) || defined (n16) #if defined (UMAXV) printf ("ns32k-encore-sysv\n"); exit (0); #else #if defined (CMU) printf ("ns32k-encore-mach\n"); exit (0); #else printf ("ns32k-encore-bsd\n"); exit (0); #endif #endif #endif #if defined (__386BSD__) printf ("i386-pc-bsd\n"); exit (0); #endif #if defined (sequent) #if defined (i386) printf ("i386-sequent-dynix\n"); exit (0); #endif #if defined (ns32000) printf ("ns32k-sequent-dynix\n"); exit (0); #endif #endif #if defined (_SEQUENT_) struct utsname un; uname(&un); if (strncmp(un.version, "V2", 2) == 0) { printf ("i386-sequent-ptx2\n"); exit (0); } if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct?
*/ printf ("i386-sequent-ptx1\n"); exit (0); } printf ("i386-sequent-ptx\n"); exit (0); #endif #if defined (vax) #if !defined (ultrix) #include <sys/param.h> #if defined (BSD) #if BSD == 43 printf ("vax-dec-bsd4.3\n"); exit (0); #else #if BSD == 199006 printf ("vax-dec-bsd4.3reno\n"); exit (0); #else printf ("vax-dec-bsd\n"); exit (0); #endif #endif #else printf ("vax-dec-bsd\n"); exit (0); #endif #else #if defined(_SIZE_T_) || defined(SIGLOST) struct utsname un; uname (&un); printf ("vax-dec-ultrix%s\n", un.release); exit (0); #else printf ("vax-dec-ultrix\n"); exit (0); #endif #endif #endif #if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__) #if defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__) #if defined(_SIZE_T_) || defined(SIGLOST) struct utsname *un; uname (&un); printf ("mips-dec-ultrix%s\n", un.release); exit (0); #else printf ("mips-dec-ultrix\n"); exit (0); #endif #endif #endif #if defined (alliant) && defined (i860) printf ("i860-alliant-bsd\n"); exit (0); #endif exit (1); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null && SYSTEM_NAME=`"$dummy"` && { echo "$SYSTEM_NAME"; exit; } # Apollos put the system type in the environment. test -d /usr/apollo && { echo "$ISP-apollo-$SYSTYPE"; exit; } echo "$0: unable to guess system type" >&2 case $UNAME_MACHINE:$UNAME_SYSTEM in mips:Linux | mips64:Linux) # If we got here on MIPS GNU/Linux, output extra information.
cat >&2 <<EOF NOTE: MIPS GNU/Linux systems require a C compiler to fully recognize the system type. Please install a C compiler and try again. EOF ;; esac cat >&2 <<EOF This script (version $timestamp), has failed to recognize the operating system you are using. If your script is old, overwrite *all* copies of config.guess and config.sub with the latest versions from: https://git.savannah.gnu.org/cgit/config.git/plain/config.guess and https://git.savannah.gnu.org/cgit/config.git/plain/config.sub EOF our_year=`expr "$timestamp" : '\([0-9]*\).*'` thisyear=`date +%Y` # shellcheck disable=SC2003 script_age=`expr "$thisyear" - "$our_year"` if test "$script_age" -lt 3 ; then cat >&2 <<EOF If $0 has already been updated, send the following data and any information you think might be pertinent to config-patches@gnu.org to provide the necessary information to handle your system. config.guess timestamp = $timestamp uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null` /bin/uname -X = `(/bin/uname -X) 2>/dev/null` hostinfo = `(hostinfo) 2>/dev/null` /bin/universe = `(/bin/universe) 2>/dev/null` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null` /bin/arch = `(/bin/arch) 2>/dev/null` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null` UNAME_MACHINE = "$UNAME_MACHINE" UNAME_RELEASE = "$UNAME_RELEASE" UNAME_SYSTEM = "$UNAME_SYSTEM" UNAME_VERSION = "$UNAME_VERSION" EOF fi exit 1 # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: dar-2.7.17/config.sub0000755000175000017520000010511614175772605011276 00000000000000#! /bin/sh # Configuration validation subroutine script. # Copyright 1992-2022 Free Software Foundation, Inc. # shellcheck disable=SC2006,SC2268 # see below for rationale timestamp='2022-01-03' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see <https://www.gnu.org/licenses/>.
# # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # Please send patches to <config-patches@gnu.org>. # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: # https://git.savannah.gnu.org/cgit/config.git/plain/config.sub # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # or in some cases, the newer four-part form: # CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM # It is wrong to echo any other type of specification. # The "shellcheck disable" line above the timestamp inhibits complaints # about features and limitations of the classic Bourne shell that were # superseded or lifted in POSIX. However, this script identifies a wide # variety of pre-POSIX systems that do not have POSIX shells at all, and # even some reasonably current systems (Solaris 10 as case-in-point) still # have a pre-POSIX /bin/sh.
me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] CPU-MFR-OPSYS or ALIAS Canonicalize a configuration name. Options: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to <config-patches@gnu.org>." version="\ GNU config.sub ($timestamp) Copyright 1992-2022 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; *local*) # First pass through any local machine types. echo "$1" exit ;; * ) break ;; esac done case $# in 0) echo "$me: missing argument$help" >&2 exit 1;; 1) ;; *) echo "$me: too many arguments$help" >&2 exit 1;; esac # Split fields of configuration type # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read field1 field2 field3 field4 <<EOF $1 EOF IFS=$saved_IFS # Separate into logical components for further validation case $1 in *-*-*-*-*) echo "Invalid configuration \`$1': more than four components" >&2 exit 1 ;; *-*-*-*) basic_machine=$field1-$field2 basic_os=$field3-$field4 ;; *-*-*) # Ambiguous whether COMPANY is present, or skipped and KERNEL-OS is two # parts maybe_os=$field2-$field3 case $maybe_os in nto-qnx* | linux-* | uclinux-uclibc* \ | uclinux-gnu* | kfreebsd*-gnu* | knetbsd*-gnu* | netbsd*-gnu* \ | netbsd*-eabi* | kopensolaris*-gnu* | cloudabi*-eabi* \ | storm-chaos* | os2-emx* | rtmk-nova*) basic_machine=$field1 basic_os=$maybe_os ;; android-linux) basic_machine=$field1-unknown basic_os=linux-android ;; *) basic_machine=$field1-$field2 basic_os=$field3 ;; esac ;; *-*) # A lone config we happen to match not fitting any pattern case $field1-$field2 in decstation-3100) basic_machine=mips-dec basic_os= ;;
*-*) # Second component is usually, but not always the OS case $field2 in # Prevent following clause from handling this valid os sun*os*) basic_machine=$field1 basic_os=$field2 ;; zephyr*) basic_machine=$field1-unknown basic_os=$field2 ;; # Manufacturers dec* | mips* | sequent* | encore* | pc533* | sgi* | sony* \ | att* | 7300* | 3300* | delta* | motorola* | sun[234]* \ | unicom* | ibm* | next | hp | isi* | apollo | altos* \ | convergent* | ncr* | news | 32* | 3600* | 3100* \ | hitachi* | c[123]* | convex* | sun | crds | omron* | dg \ | ultra | tti* | harris | dolphin | highlevel | gould \ | cbm | ns | masscomp | apple | axis | knuth | cray \ | microblaze* | sim | cisco \ | oki | wec | wrs | winbond) basic_machine=$field1-$field2 basic_os= ;; *) basic_machine=$field1 basic_os=$field2 ;; esac ;; esac ;; *) # Convert single-component short-hands not valid as part of # multi-component configurations. case $field1 in 386bsd) basic_machine=i386-pc basic_os=bsd ;; a29khif) basic_machine=a29k-amd basic_os=udi ;; adobe68k) basic_machine=m68010-adobe basic_os=scout ;; alliant) basic_machine=fx80-alliant basic_os= ;; altos | altos3068) basic_machine=m68k-altos basic_os= ;; am29k) basic_machine=a29k-none basic_os=bsd ;; amdahl) basic_machine=580-amdahl basic_os=sysv ;; amiga) basic_machine=m68k-unknown basic_os= ;; amigaos | amigados) basic_machine=m68k-unknown basic_os=amigaos ;; amigaunix | amix) basic_machine=m68k-unknown basic_os=sysv4 ;; apollo68) basic_machine=m68k-apollo basic_os=sysv ;; apollo68bsd) basic_machine=m68k-apollo basic_os=bsd ;; aros) basic_machine=i386-pc basic_os=aros ;; aux) basic_machine=m68k-apple basic_os=aux ;; balance) basic_machine=ns32k-sequent basic_os=dynix ;; blackfin) basic_machine=bfin-unknown basic_os=linux ;; cegcc) basic_machine=arm-unknown basic_os=cegcc ;; convex-c1) basic_machine=c1-convex basic_os=bsd ;; convex-c2) basic_machine=c2-convex basic_os=bsd ;; convex-c32) basic_machine=c32-convex basic_os=bsd ;; convex-c34) 
basic_machine=c34-convex basic_os=bsd ;; convex-c38) basic_machine=c38-convex basic_os=bsd ;; cray) basic_machine=j90-cray basic_os=unicos ;; crds | unos) basic_machine=m68k-crds basic_os= ;; da30) basic_machine=m68k-da30 basic_os= ;; decstation | pmax | pmin | dec3100 | decstatn) basic_machine=mips-dec basic_os= ;; delta88) basic_machine=m88k-motorola basic_os=sysv3 ;; dicos) basic_machine=i686-pc basic_os=dicos ;; djgpp) basic_machine=i586-pc basic_os=msdosdjgpp ;; ebmon29k) basic_machine=a29k-amd basic_os=ebmon ;; es1800 | OSE68k | ose68k | ose | OSE) basic_machine=m68k-ericsson basic_os=ose ;; gmicro) basic_machine=tron-gmicro basic_os=sysv ;; go32) basic_machine=i386-pc basic_os=go32 ;; h8300hms) basic_machine=h8300-hitachi basic_os=hms ;; h8300xray) basic_machine=h8300-hitachi basic_os=xray ;; h8500hms) basic_machine=h8500-hitachi basic_os=hms ;; harris) basic_machine=m88k-harris basic_os=sysv3 ;; hp300 | hp300hpux) basic_machine=m68k-hp basic_os=hpux ;; hp300bsd) basic_machine=m68k-hp basic_os=bsd ;; hppaosf) basic_machine=hppa1.1-hp basic_os=osf ;; hppro) basic_machine=hppa1.1-hp basic_os=proelf ;; i386mach) basic_machine=i386-mach basic_os=mach ;; isi68 | isi) basic_machine=m68k-isi basic_os=sysv ;; m68knommu) basic_machine=m68k-unknown basic_os=linux ;; magnum | m3230) basic_machine=mips-mips basic_os=sysv ;; merlin) basic_machine=ns32k-utek basic_os=sysv ;; mingw64) basic_machine=x86_64-pc basic_os=mingw64 ;; mingw32) basic_machine=i686-pc basic_os=mingw32 ;; mingw32ce) basic_machine=arm-unknown basic_os=mingw32ce ;; monitor) basic_machine=m68k-rom68k basic_os=coff ;; morphos) basic_machine=powerpc-unknown basic_os=morphos ;; moxiebox) basic_machine=moxie-unknown basic_os=moxiebox ;; msdos) basic_machine=i386-pc basic_os=msdos ;; msys) basic_machine=i686-pc basic_os=msys ;; mvs) basic_machine=i370-ibm basic_os=mvs ;; nacl) basic_machine=le32-unknown basic_os=nacl ;; ncr3000) basic_machine=i486-ncr basic_os=sysv4 ;; netbsd386) basic_machine=i386-pc 
basic_os=netbsd ;; netwinder) basic_machine=armv4l-rebel basic_os=linux ;; news | news700 | news800 | news900) basic_machine=m68k-sony basic_os=newsos ;; news1000) basic_machine=m68030-sony basic_os=newsos ;; necv70) basic_machine=v70-nec basic_os=sysv ;; nh3000) basic_machine=m68k-harris basic_os=cxux ;; nh[45]000) basic_machine=m88k-harris basic_os=cxux ;; nindy960) basic_machine=i960-intel basic_os=nindy ;; mon960) basic_machine=i960-intel basic_os=mon960 ;; nonstopux) basic_machine=mips-compaq basic_os=nonstopux ;; os400) basic_machine=powerpc-ibm basic_os=os400 ;; OSE68000 | ose68000) basic_machine=m68000-ericsson basic_os=ose ;; os68k) basic_machine=m68k-none basic_os=os68k ;; paragon) basic_machine=i860-intel basic_os=osf ;; parisc) basic_machine=hppa-unknown basic_os=linux ;; psp) basic_machine=mipsallegrexel-sony basic_os=psp ;; pw32) basic_machine=i586-unknown basic_os=pw32 ;; rdos | rdos64) basic_machine=x86_64-pc basic_os=rdos ;; rdos32) basic_machine=i386-pc basic_os=rdos ;; rom68k) basic_machine=m68k-rom68k basic_os=coff ;; sa29200) basic_machine=a29k-amd basic_os=udi ;; sei) basic_machine=mips-sei basic_os=seiux ;; sequent) basic_machine=i386-sequent basic_os= ;; sps7) basic_machine=m68k-bull basic_os=sysv2 ;; st2000) basic_machine=m68k-tandem basic_os= ;; stratus) basic_machine=i860-stratus basic_os=sysv4 ;; sun2) basic_machine=m68000-sun basic_os= ;; sun2os3) basic_machine=m68000-sun basic_os=sunos3 ;; sun2os4) basic_machine=m68000-sun basic_os=sunos4 ;; sun3) basic_machine=m68k-sun basic_os= ;; sun3os3) basic_machine=m68k-sun basic_os=sunos3 ;; sun3os4) basic_machine=m68k-sun basic_os=sunos4 ;; sun4) basic_machine=sparc-sun basic_os= ;; sun4os3) basic_machine=sparc-sun basic_os=sunos3 ;; sun4os4) basic_machine=sparc-sun basic_os=sunos4 ;; sun4sol2) basic_machine=sparc-sun basic_os=solaris2 ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun basic_os= ;; sv1) basic_machine=sv1-cray basic_os=unicos ;; symmetry) basic_machine=i386-sequent 
basic_os=dynix ;; t3e) basic_machine=alphaev5-cray basic_os=unicos ;; t90) basic_machine=t90-cray basic_os=unicos ;; toad1) basic_machine=pdp10-xkl basic_os=tops20 ;; tpf) basic_machine=s390x-ibm basic_os=tpf ;; udi29k) basic_machine=a29k-amd basic_os=udi ;; ultra3) basic_machine=a29k-nyu basic_os=sym1 ;; v810 | necv810) basic_machine=v810-nec basic_os=none ;; vaxv) basic_machine=vax-dec basic_os=sysv ;; vms) basic_machine=vax-dec basic_os=vms ;; vsta) basic_machine=i386-pc basic_os=vsta ;; vxworks960) basic_machine=i960-wrs basic_os=vxworks ;; vxworks68) basic_machine=m68k-wrs basic_os=vxworks ;; vxworks29k) basic_machine=a29k-wrs basic_os=vxworks ;; xbox) basic_machine=i686-pc basic_os=mingw32 ;; ymp) basic_machine=ymp-cray basic_os=unicos ;; *) basic_machine=$1 basic_os= ;; esac ;; esac # Decode 1-component or ad-hoc basic machines case $basic_machine in # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. w89k) cpu=hppa1.1 vendor=winbond ;; op50n) cpu=hppa1.1 vendor=oki ;; op60c) cpu=hppa1.1 vendor=oki ;; ibm*) cpu=i370 vendor=ibm ;; orion105) cpu=clipper vendor=highlevel ;; mac | mpw | mac-mpw) cpu=m68k vendor=apple ;; pmac | pmac-mpw) cpu=powerpc vendor=apple ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 
3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) cpu=m68000 vendor=att ;; 3b*) cpu=we32k vendor=att ;; bluegene*) cpu=powerpc vendor=ibm basic_os=cnk ;; decsystem10* | dec10*) cpu=pdp10 vendor=dec basic_os=tops10 ;; decsystem20* | dec20*) cpu=pdp10 vendor=dec basic_os=tops20 ;; delta | 3300 | motorola-3300 | motorola-delta \ | 3300-motorola | delta-motorola) cpu=m68k vendor=motorola ;; dpx2*) cpu=m68k vendor=bull basic_os=sysv3 ;; encore | umax | mmax) cpu=ns32k vendor=encore ;; elxsi) cpu=elxsi vendor=elxsi basic_os=${basic_os:-bsd} ;; fx2800) cpu=i860 vendor=alliant ;; genix) cpu=ns32k vendor=ns ;; h3050r* | hiux*) cpu=hppa1.1 vendor=hitachi basic_os=hiuxwe2 ;; hp3k9[0-9][0-9] | hp9[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; hp9k2[0-9][0-9] | hp9k31[0-9]) cpu=m68000 vendor=hp ;; hp9k3[2-9][0-9]) cpu=m68k vendor=hp ;; hp9k6[0-9][0-9] | hp6[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; hp9k7[0-79][0-9] | hp7[0-79][0-9]) cpu=hppa1.1 vendor=hp ;; hp9k78[0-9] | hp78[0-9]) # FIXME: really hppa2.0-hp cpu=hppa1.1 vendor=hp ;; hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) # FIXME: really hppa2.0-hp cpu=hppa1.1 vendor=hp ;; hp9k8[0-9][13679] | hp8[0-9][13679]) cpu=hppa1.1 vendor=hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; i*86v32) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv32 ;; i*86v4*) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv4 ;; i*86v) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv ;; i*86sol2) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=solaris2 ;; j90 | j90-cray) cpu=j90 vendor=cray basic_os=${basic_os:-unicos} ;; iris | iris4d) cpu=mips vendor=sgi case $basic_os in irix*) ;; *) basic_os=irix4 ;; esac ;; miniframe) cpu=m68000 vendor=convergent ;; *mint | mint[0-9]* | *MiNT | *MiNT[0-9]*) cpu=m68k vendor=atari basic_os=mint ;; news-3600 | risc-news) cpu=mips vendor=sony basic_os=newsos ;; next | m*-next) cpu=m68k vendor=next case $basic_os in 
openstep*) ;; nextstep*) ;; ns2*) basic_os=nextstep2 ;; *) basic_os=nextstep3 ;; esac ;; np1) cpu=np1 vendor=gould ;; op50n-* | op60c-*) cpu=hppa1.1 vendor=oki basic_os=proelf ;; pa-hitachi) cpu=hppa1.1 vendor=hitachi basic_os=hiuxwe2 ;; pbd) cpu=sparc vendor=tti ;; pbb) cpu=m68k vendor=tti ;; pc532) cpu=ns32k vendor=pc532 ;; pn) cpu=pn vendor=gould ;; power) cpu=power vendor=ibm ;; ps2) cpu=i386 vendor=ibm ;; rm[46]00) cpu=mips vendor=siemens ;; rtpc | rtpc-*) cpu=romp vendor=ibm ;; sde) cpu=mipsisa32 vendor=sde basic_os=${basic_os:-elf} ;; simso-wrs) cpu=sparclite vendor=wrs basic_os=vxworks ;; tower | tower-32) cpu=m68k vendor=ncr ;; vpp*|vx|vx-*) cpu=f301 vendor=fujitsu ;; w65) cpu=w65 vendor=wdc ;; w89k-*) cpu=hppa1.1 vendor=winbond basic_os=proelf ;; none) cpu=none vendor=none ;; leon|leon[3-9]) cpu=sparc vendor=$basic_machine ;; leon-*|leon[3-9]-*) cpu=sparc vendor=`echo "$basic_machine" | sed 's/-.*//'` ;; *-*) # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read cpu vendor <&2 exit 1 ;; esac ;; esac # Here we canonicalize certain aliases for manufacturers. case $vendor in digital*) vendor=dec ;; commodore*) vendor=cbm ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if test x$basic_os != x then # First recognize some ad-hoc cases, or perhaps split kernel-os, or else just # set os. case $basic_os in gnu/linux*) kernel=linux os=`echo "$basic_os" | sed -e 's|gnu/linux|gnu|'` ;; os2-emx) kernel=os2 os=`echo "$basic_os" | sed -e 's|os2-emx|emx|'` ;; nto-qnx*) kernel=nto os=`echo "$basic_os" | sed -e 's|nto-qnx|qnx|'` ;; *-*) # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read kernel os <&2 exit 1 ;; esac # As a final step for OS-related things, validate the OS-kernel combination # (given a valid OS), if there is a kernel. 
case $kernel-$os in linux-gnu* | linux-dietlibc* | linux-android* | linux-newlib* \ | linux-musl* | linux-relibc* | linux-uclibc* ) ;; uclinux-uclibc* ) ;; -dietlibc* | -newlib* | -musl* | -relibc* | -uclibc* ) # These are just libc implementations, not actual OSes, and thus # require a kernel. echo "Invalid configuration \`$1': libc \`$os' needs explicit kernel." 1>&2 exit 1 ;; kfreebsd*-gnu* | kopensolaris*-gnu*) ;; vxworks-simlinux | vxworks-simwindows | vxworks-spe) ;; nto-qnx*) ;; os2-emx) ;; *-eabi* | *-gnueabi*) ;; -*) # Blank kernel with real OS is always fine. ;; *-*) echo "Invalid configuration \`$1': Kernel \`$kernel' not known to work with OS \`$os'." 1>&2 exit 1 ;; esac # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. case $vendor in unknown) case $cpu-$os in *-riscix*) vendor=acorn ;; *-sunos*) vendor=sun ;; *-cnk* | *-aix*) vendor=ibm ;; *-beos*) vendor=be ;; *-hpux*) vendor=hp ;; *-mpeix*) vendor=hp ;; *-hiux*) vendor=hitachi ;; *-unos*) vendor=crds ;; *-dgux*) vendor=dg ;; *-luna*) vendor=omron ;; *-genix*) vendor=ns ;; *-clix*) vendor=intergraph ;; *-mvs* | *-opened*) vendor=ibm ;; *-os400*) vendor=ibm ;; s390-* | s390x-*) vendor=ibm ;; *-ptx*) vendor=sequent ;; *-tpf*) vendor=ibm ;; *-vxsim* | *-vxworks* | *-windiss*) vendor=wrs ;; *-aux*) vendor=apple ;; *-hms*) vendor=hitachi ;; *-mpw* | *-macos*) vendor=apple ;; *-*mint | *-mint[0-9]* | *-*MiNT | *-MiNT[0-9]*) vendor=atari ;; *-vos*) vendor=stratus ;; esac ;; esac echo "$cpu-$vendor-${kernel:+$kernel-}$os" exit # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: dar-2.7.17/INSTALL0000644000175000017520000000054014767500073010342 00000000000000 I N S T A L L A T I O N I N S T R U C T I O N S For the impatients using a released or pre-release package: ./configure make make install-strip for the others 
and for more detailed information, please read the complete documentation pointing your html browser to: doc/from_sources.html dar-2.7.17/ABOUT-NLS0000644000175000017520000000010314767507774010552 00000000000000 dar-2.7.17/configure.ac0000444000175000017520000037124514767507774011631 00000000000000####################################################################### # dar - disk archive - a backup/restoration program # Copyright (C) 2002-2025 Denis Corbin # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. # # to contact the author, see the AUTHOR file ####################################################################### # Process this file with autoconf to produce a configure script. 
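As the INSTALL notes above summarize, building from a released tarball only needs the already-generated configure script; regenerating that script from this configure.ac requires the autotools. A minimal sketch of both paths (tool availability on the build host is assumed, not guaranteed by this package):

```sh
# From a released tarball (configure is already generated):
./configure
make
make install-strip

# From a git checkout, regenerate configure from configure.ac first
# (assumes autoconf, automake, libtool and gettext are installed):
autoreconf -is && ./configure && make
```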
AC_PREREQ(2.69)
AC_INIT([DAR], [2.7.17], [[https://github.com/Edrusb/DAR/issues]])
AC_CONFIG_HEADERS([config.h])
AC_LANG([C++])
AC_CONFIG_SRCDIR([src/libdar/catalogue.cpp])
AC_DEFINE_UNQUOTED(DAR_VERSION, "AC_PACKAGE_VERSION", [dar and dar_suite version, definition in configure.ac])

AM_INIT_AUTOMAKE([subdir-objects])
AM_GNU_GETTEXT([external])
AM_GNU_GETTEXT_VERSION
XGETTEXT_EXTRA_OPTIONS='--keyword=dar_gettext'
AM_ICONV

####
# configure checks what is available from the operating system:
# - it displays things on output for the user running the configure script, as status information
# - it sets some shell variables that are not used outside the configuration script
# - it sets some shell variables that can be substituted in Makefile.in files (see AC_SUBST() and AC_CONFIG_FILES()),
#   also known as "output variables"
# - it defines macros that get stored in config.h and used in source code (see AC_DEFINE())
#
# header files:
# header files are #included in source code if their HAVE_... macro has been defined in config.h
#
# libraries:
# necessary library flags are stored in the LIBS "output variable", a substitutable shell variable passed to Makefile.in
# files, where it gets substituted wherever the @LIBS@ form is met.
#

# have a specific variable for pkgconfig, setting the default value:
AC_SUBST(pkgconfigdir, [${libdir}/pkgconfig])
AC_ARG_WITH([pkgconfigdir],
            AS_HELP_STRING(--with-pkgconfigdir=DIR, [defines an alternative directory to install pkgconfig files, default is '${libdir}/pkgconfig']),
            [ if [ ! -z "$withval" ] ; then
                 AC_SUBST(pkgconfigdir, $withval)
              fi
            ],
            [])

# Checks for programs.
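The comment block above describes how AC_SUBST()ed "output variables" reach the generated Makefiles: config.status rewrites every @VAR@ marker in each Makefile.in. A standalone sketch of that substitution step (the LIBS value below is a made-up example, not what configure would actually detect):

```shell
# Emulate what config.status does with an AC_SUBST()ed output variable:
# every @LIBS@ marker in a Makefile.in fragment is replaced by the value
# configure collected (the value here is invented for illustration).
LIBS="-lz -lbz2 -llzo2"
makefile_in='LIBS = @LIBS@'
makefile=$(printf '%s\n' "$makefile_in" | sed -e "s|@LIBS@|$LIBS|g")
printf '%s\n' "$makefile"
# prints: LIBS = -lz -lbz2 -llzo2
```

The real config.status performs this with generated sed/awk scripts over every file listed in AC_CONFIG_FILES(), but the principle is the same one-for-one textual substitution.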
AC_PROG_CXX
AC_PROG_CC
AC_PROG_LIBTOOL
AC_PROG_MAKE_SET
AC_PROG_RANLIB

AC_MSG_CHECKING([for C++ compiler usability])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],
                                   [ class test { public: int i; }; ])],
                  [AC_MSG_RESULT(ok)],
                  [AC_MSG_ERROR([No C++ compiler found])])

# Defining _XOPEN_SOURCE to get extra field in struct stat
AC_DEFINE(_XOPEN_SOURCE, 700, [activates POSIX.1-2008 symbols in order to allow microsecond time setting, as well as ctime_r() call])

# Define _BSD_SOURCE in order to be able to call makedev(), minor() and major() under OpenBSD when _XOPEN_SOURCE is set
AC_DEFINE(_BSD_SOURCE, 1, [activate makedev(), major() and minor() when _XOPEN_SOURCE is set])
AC_DEFINE(_DEFAULT_SOURCE, 1, [disabling warning about _BSD_SOURCE being deprecated])

###########
## THE FOLLOWING "DEFINE"S, USED TO RE-ENABLE FULL LIBC FEATURES ON DIFFERENT
## OPERATING SYSTEMS, HAVE BEEN BORROWED FROM PYTHON's configure.in
##
##

# The later definition of _XOPEN_SOURCE disables certain features
# on Linux, so we need _GNU_SOURCE to re-enable them (makedev, tm_zone).
AC_DEFINE(_GNU_SOURCE, 1, [Define on Linux to activate all library features])

# The later definition of _XOPEN_SOURCE and _POSIX_C_SOURCE disables
# certain features on NetBSD, so we need _NETBSD_SOURCE to re-enable
# them.
AC_DEFINE(_NETBSD_SOURCE, 1, [Define on NetBSD to activate all library features])

# The later definition of _XOPEN_SOURCE and _POSIX_C_SOURCE disables
# certain features on FreeBSD, so we need __BSD_VISIBLE to re-enable
# them.
AC_DEFINE(__BSD_VISIBLE, 1, [Define on FreeBSD to activate all library features])

# The later definition of _XOPEN_SOURCE and _POSIX_C_SOURCE disables
# certain features on Mac OS X, so we need _DARWIN_C_SOURCE to re-enable
# them.
AC_DEFINE(_DARWIN_C_SOURCE, 1, [Define on Darwin to activate all library features])
##
##
###########

# Checks for libraries.
AC_CHECK_LIB(socket, [socket], [], []) AC_CHECK_LIB(nsl, [endnetconfig], [], []) AC_CHECK_LIB(cap, [cap_get_proc], [], []) AC_ARG_ENABLE( [libdl-linking], AS_HELP_STRING(--disable-libdl-linking, [ignore any libdl and avoid linking against it]), [ AS_IF([test "x$enable_libdl_linking" != "xno"], [AC_MSG_ERROR([invalid argument given to --disable-libdl-linking])] ) ], [ AC_CHECK_LIB(dl, [dlsym], [], []) ]) # Checks for header files. AC_HEADER_DIRENT AC_HEADER_STDC AC_HEADER_SYS_WAIT AC_CHECK_HEADERS([fcntl.h netinet/in.h arpa/inet.h stdint.h stdlib.h string.h sys/ioctl.h sys/socket.h termios.h unistd.h utime.h sys/types.h signal.h errno.h sys/un.h sys/stat.h time.h fnmatch.h regex.h pwd.h grp.h stdio.h pthread.h ctype.h getopt.h limits.h stddef.h sys/utsname.h libintl.h sys/capability.h linux/capability.h utimes.h sys/time.h wchar.h wctype.h stddef.h]) AC_SYS_LARGEFILE # Checks for typedefs, structures, and compiler characteristics. AC_C_CONST AC_C_INLINE AC_TYPE_OFF_T AC_TYPE_PID_T AC_TYPE_SIZE_T AC_CHECK_MEMBERS([struct stat.st_rdev]) AC_DECL_SYS_SIGLIST AC_CHECK_TYPE(size_t, [AC_CHECK_SIZEOF(size_t)], [AC_MSG_ERROR([Cannot find size_t type])], []) AC_CHECK_TYPE(time_t, [AC_CHECK_SIZEOF(time_t)], [AC_MSG_ERROR([Cannot find time_t type])], []) AC_CHECK_TYPE(off_t, [AC_CHECK_SIZEOF(off_t)], [AC_MSG_ERROR([Cannot find off_t type])], []) # Checks for library functions. 
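Several of the checks that follow are plain shell probes rather than autoconf macros; the sed -r/-E detection just below, for instance, can be exercised on its own. A standalone sketch of that probe (variable name borrowed from the script's local_sed):

```shell
# Probe which flag this system's sed understands for extended regular
# expressions: GNU sed takes -r, BSD sed takes -E (configure errors out
# when neither works).
if sed -r -e 's/(c|o)+/\1/g' > /dev/null < /dev/null 2> /dev/null ; then
    local_sed="gnu"
elif sed -E -e 's/(c|o)+/\1/g' > /dev/null < /dev/null 2> /dev/null ; then
    local_sed="bsd"
else
    local_sed="unknown"
fi
echo "sed flavor: $local_sed"
```

Note that recent GNU sed also accepts -E, so like the configure script this probe tries -r first and only reports "bsd" when -r is rejected.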
AC_FUNC_FNMATCH
AC_FUNC_FORK
AC_PROG_GCC_TRADITIONAL
AC_FUNC_LSTAT
AC_HEADER_MAJOR
AC_FUNC_MALLOC
AC_TYPE_SIGNAL
AC_FUNC_STAT
AC_FUNC_UTIME_NULL
AC_HEADER_TIME
AC_CHECK_FUNCS([lchown mkdir regcomp rmdir strerror_r utime fdopendir readdir_r ctime_r getgrnam_r getpwnam_r localtime_r])

AC_MSG_CHECKING([for c++14 support])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([ #include <memory> ],
                                   [ thread_local static int test = 0;
                                     std::unique_ptr<int> x = std::make_unique<int>(0);
                                   ])
                  ],
                  [ AC_MSG_RESULT(yes) ],
                  [ AC_MSG_RESULT([no])
                    AC_MSG_CHECKING([for c++14 support with -std=c++14 option set])
                    CXXSTDFLAGS="-std=c++14"
                    CXXFLAGS="$CXXFLAGS $CXXSTDFLAGS"
                    AC_COMPILE_IFELSE([AC_LANG_PROGRAM([ #include <memory> ],
                                                       [ thread_local static int test = 0;
                                                         std::unique_ptr<int> x = std::make_unique<int>(0);
                                                       ])
                                      ],
                                      [ AC_MSG_RESULT(yes) ],
                                      [ AC_MSG_RESULT(no)
                                        AC_MSG_ERROR([C++ compiler lacks support for the c++14 standard])
                                      ])
                  ])

AC_MSG_CHECKING([for sed -r/-E option])
if sed -r -e 's/(c|o)+/\1/g' > /dev/null < /dev/null ; then
    local_sed="gnu"
    AC_MSG_RESULT([GNU sed, using -r option for regex])
else
    if sed -E -e 's/(c|o)+/\1/g' > /dev/null < /dev/null ; then
        local_sed="bsd"
        AC_MSG_RESULT([BSD sed, using -E option for regex])
    else
        local_sed=unknown
        AC_MSG_ERROR([unknown switch to use with sed to support regex])
    fi
fi

AC_MSG_CHECKING([for getopt() in <unistd.h>])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
}]],
                                   [ getopt(0, 0, 0); ])
                  ],
                  [ AC_DEFINE(HAVE_GETOPT_IN_UNISTD_H, 1, [a getopt() call is declared in <unistd.h>])
                    AC_MSG_RESULT(present)
                  ],
                  [AC_MSG_RESULT(absent)])

AC_MSG_CHECKING([for getopt_long() in <unistd.h>])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
}]],
                                   [ getopt_long(0, 0, 0, 0, 0); ])
                  ],
                  [ AC_DEFINE(HAVE_GETOPT_LONG_IN_UNISTD_H, 1, [a getopt_long() call is declared in <unistd.h>])
                    AC_MSG_RESULT(present)
                  ],
                  [AC_MSG_RESULT(absent)])

AC_MSG_CHECKING([for optreset presence])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_STDIO_H
#include <stdio.h>
#endif
#if HAVE_GETOPT_H
#include <getopt.h>
#else
#if
HAVE_UNISTD_H
#include <unistd.h>
#endif
#endif
}]],
                                   [ int x = optreset;
                                     return 0;
                                   ])
                  ],
                  [ AC_DEFINE(HAVE_OPTRESET, 1, [the optreset external variable exists to reset getopt standard call])
                    AC_MSG_RESULT(available)
                  ],
                  [AC_MSG_RESULT([not available])])

AC_MSG_CHECKING([for Door file support])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_SYS_STAT_H
#include <sys/stat.h>
#endif
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
}]],
                                   [ struct stat buf;
                                     if(S_ISDOOR(buf.st_mode))
                                         return 0;
                                     else
                                         return 1;
                                   ])
                  ],
                  [ AC_DEFINE(HAVE_DOOR, 1, [whether the system has the necessary routine to handle Door files])
                    AC_MSG_RESULT(available)
                  ],
                  [AC_MSG_RESULT([not available])])

AC_MSG_CHECKING([for POSIX.1e capabilities support])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_SYS_CAPABILITY_H
#include <sys/capability.h>
#else
#if HAVE_LINUX_CAPABILITY_H
#include <linux/capability.h>
#endif
#endif
#if HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
}]],
                                   [ cap_t capaset = cap_get_proc();
                                     (void)cap_free((void *)capaset);
                                   ])
                  ],
                  [ AC_DEFINE(HAVE_CAPABILITIES, 1, [whether the system has support for POSIX.1e capabilities])
                    AC_MSG_RESULT(available)
                  ],
                  [ AC_MSG_RESULT([not available]) ])

AC_MSG_CHECKING([for fdatasync() availability])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
}]],
                                   [ (void)fdatasync(0); ])
                  ],
                  [ AC_DEFINE(HAVE_FDATASYNC, 1, [whether the system provides fdatasync() system call])
                    AC_MSG_RESULT(available)
                  ],
                  [ AC_MSG_RESULT([not available]) ])

AC_MSG_CHECKING([for syncfs() availability])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
}]],
                                   [ (void)syncfs(0); ])
                  ],
                  [ AC_DEFINE(HAVE_SYNCFS, 1, [whether the system provides syncfs() system call])
                    AC_MSG_RESULT(available)
                  ],
                  [ AC_MSG_RESULT([not available]) ])

local_time_accuracy_second=0
local_time_accuracy_microsecond=6
local_time_accuracy_nanosecond=9
AC_DEFINE_UNQUOTED(LIBDAR_TIME_ACCURACY_SECOND, $local_time_accuracy_second, [value for time accuracy representing an accuracy of 1 second])
AC_DEFINE_UNQUOTED(LIBDAR_TIME_ACCURACY_MICROSECOND, $local_time_accuracy_microsecond, [value for time accuracy representing an accuracy of 1 microsecond])
AC_DEFINE_UNQUOTED(LIBDAR_TIME_ACCURACY_NANOSECOND, $local_time_accuracy_nanosecond, [value for time accuracy representing an accuracy of 1 nanosecond])

AC_ARG_ENABLE( [limit-time-accuracy],
               AS_HELP_STRING(--enable-limit-time-accuracy, [limit time accuracy to nanosecond (ns) microsecond (us) or second (s)]),
               [ AS_IF( [ test "x$enable_limit_time_accuracy" != "xs" -a "x$enable_limit_time_accuracy" != "xus" -a "x$enable_limit_time_accuracy" != "xns" ],
                        [ AC_MSG_ERROR([invalid argument given to --enable-limit-time-accuracy, valid values are: s, us, ns])],
                        [ AC_MSG_WARN([limiting time accuracy to $enable_limit_time_accuracy])]
                      )
               ],
               [ enable_limit_time_accuracy=no ])

AC_MSG_CHECKING([for the timestamps write precision])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_FCNTL_H
#include <fcntl.h>
#endif
#if HAVE_SYS_STAT_H
#include <sys/stat.h>
#endif
}]],
[[
   struct timespec a[2];
   a[0].tv_sec = 0;
   a[1].tv_nsec = 1;
   (void)utimensat(0, "/tmp/testfile", a, 0);
   /* note that this test program is only compiled+linked, not executed */
]])
                  ],
                  [ AS_IF([ test "x$enable_limit_time_accuracy" = "xs" -o "x$enable_limit_time_accuracy" = "xus" ],
                          [ local_time_write_accuracy=-1 ],
                          [ local_time_write_accuracy=$local_time_accuracy_nanosecond
                            AC_MSG_RESULT([1 nanosecond])
                          ])
                  ],
                  [ local_time_write_accuracy=-1 ])

AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#if HAVE_SYS_STAT_H
#include <sys/stat.h>
#endif
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
#if HAVE_UTIMES_H
#include <utimes.h>
#endif
#if HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
}]],
[[
   struct timeval tv[2];
   tv[0].tv_sec = 1000;
   tv[1].tv_usec = 2000;
   (void)utimes("/tmp/testfile.tmp", tv);
   /* note that this test program is only compiled+linked, not run */
   return 0;
]])
                  ],
                  [ AS_IF([ test $local_time_write_accuracy -eq -1 ],
                          AS_IF([ test "x$enable_limit_time_accuracy" != "xs" ],
                                [
local_time_write_accuracy=$local_time_accuracy_microsecond
                                  AC_MSG_RESULT([1 microsecond])
                                ],
                                [ local_time_write_accuracy=$local_time_accuracy_second
                                  AC_MSG_RESULT([1 second])
                                ])
                         )
                  ],
                  [ AS_IF([ test $local_time_write_accuracy -eq -1 ],
                          [ local_time_write_accuracy=$local_time_accuracy_second
                            AC_MSG_RESULT([1 second])
                          ])
                  ])
AC_DEFINE_UNQUOTED(LIBDAR_TIME_WRITE_ACCURACY, $local_time_write_accuracy, [timestamps write accuracy])

AC_MSG_CHECKING([for the timestamps read precision])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#if HAVE_SYS_STAT_H
#include <sys/stat.h>
#endif
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
}]],
[[
   struct stat st;
   if(st.st_atim.tv_nsec != 0)
       return 0;
   else
       return 1;
   /* whatever the outcome, this test program is only compiled, not run */
]])
                  ],
                  [ AS_IF([ test "x$enable_limit_time_accuracy" = "xus" ],
                          [ local_time_read_accuracy=$local_time_accuracy_microsecond
                            AC_MSG_RESULT([1 microsecond])
                          ],
                          [ AS_IF([ test "x$enable_limit_time_accuracy" = "xs" ],
                                  [ local_time_read_accuracy=$local_time_accuracy_second
                                    AC_MSG_RESULT([1 second])
                                  ],
                                  [ local_time_read_accuracy=$local_time_accuracy_nanosecond
                                    AC_MSG_RESULT([1 nanosecond])
                                  ])
                          ])
                  ],
                  [ local_time_read_accuracy=$local_time_accuracy_second
                    AC_MSG_RESULT([1 second])
                  ])
AC_DEFINE_UNQUOTED(LIBDAR_TIME_READ_ACCURACY, $local_time_read_accuracy, [timestamps read accuracy])

AC_MSG_CHECKING([for lutimes() availability])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
} // extern "C"
]],
[[
   struct timeval tv[2];
   int lu = lutimes("/tmp/noway", tv);
]])
                  ],
                  [ AC_DEFINE(HAVE_LUTIMES, 1, [if lutimes() system call is available])
                    local_lutimes=yes
                    AC_MSG_RESULT(available)
                  ],
                  [ AC_MSG_RESULT([not available])]
                 )

AC_MSG_CHECKING([for strerror_r flavor])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" {
#if HAVE_STRING_H
#include <string.h>
#endif
} // extern "C"
]],
[[
   char *ptr = strerror_r(0, 0, 0);
]])
                  ],
                  [ AC_DEFINE(HAVE_STRERROR_R_CHAR_PTR, 1, [strerror_r() returns a char *])
AC_MSG_RESULT([GNU specific]) ],
                  [ AC_MSG_RESULT([XSI compliant]) ])

# DAR's features

AC_ARG_ENABLE( [linux-statx],
               AS_HELP_STRING(--disable-linux-statx, [ignore linux's statx() system call and do not save birthtime of files under Linux, because it is only possible to set and thus restore it under BSD systems like MACOS X, not under Linux today]),
               [explicit_linux_statx=yes],
               [enable_linux_statx=yes])
AS_IF( [test "x$enable_linux_statx" != "xyes"],
       [ AC_MSG_WARN([Linux statx() system call not used if present])
         local_statx="no"
       ],
       [ AC_MSG_CHECKING([for linux's statx() availability])
         AC_RUN_IFELSE([AC_LANG_PROGRAM(
[[extern "C" {
#if HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#if HAVE_SYS_STAT_H
#include <sys/stat.h>
#endif
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
#if HAVE_FCNTL_H
#include <fcntl.h>
#endif
}]],
                       [ struct statx val;
                         int ret = statx(0, "/", 0, STATX_BTIME, &val);
                         if(ret != 0)
                             return 1; /* will not use statx */
                         else
                             return 0; /* possible to go further with statx */
                       ])
                      ],
                      [ AC_DEFINE(HAVE_STATX_SYSCALL, 1, [system provides statx() system call, will be used to get inode birthtime on Linux])
                        local_statx=yes
                      ],
                      [ local_statx=no ],
                      [ # if cross compiling
                        AC_LINK_IFELSE( [AC_LANG_PROGRAM(
[[extern "C" {
#if HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#if HAVE_SYS_STAT_H
#include <sys/stat.h>
#endif
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
#if HAVE_FCNTL_H
#include <fcntl.h>
#endif
}]],
                                        [ struct statx val;
                                          (void) statx(0, "/", 0, STATX_BTIME, &val);
                                        ])
                                      ],
                                      [ AC_DEFINE(HAVE_STATX_SYSCALL, 1, [system provides statx() system call, will be used to get inode birthtime on Linux])
                                        local_statx=yes
                                      ],
                                      [ local_statx=no ])
                      ]
                     )
       ])

AC_ARG_ENABLE( [libz-linking],
               AS_HELP_STRING(--disable-libz-linking, [disable linking with libz and disable libz compression support]),
               [explicit_libz_linking=yes],
               [enable_libz_linking=yes])
AS_IF( [test "x$enable_libz_linking" != "xyes"],
       [ AC_MSG_WARN([libz compression support has been disabled by user])
         local_libz="no"
       ],
       [ AC_CHECK_LIB(z, [deflate], [], [AC_MSG_WARN([library zlib not found])])
         AC_CHECK_HEADER(zlib.h,
[local_libz="yes"
          AC_DEFINE(HAVE_ZLIB_H, 1, [zlib.h header file is available])
         ],
         [AC_MSG_WARN([Cannot find zlib.h header file])
          local_libz="no"
         ])
         if test "$local_libz" = "yes" ; then
            AC_LINK_IFELSE([AC_LANG_PROGRAM([[
extern "C" {
#if HAVE_ZLIB_H
#include <zlib.h>
#endif
}]],
[[
   z_stream *ptr = (z_stream *)0;
   deflate(ptr, 0);
]])
                           ],
                           [ AC_DEFINE(LIBZ_AVAILABLE, 1, [header and linking is available to have libz functions])],
                           [ local_libz="no" ])
         else
            AC_MSG_WARN([libz compression support not available])
         fi
         AS_IF( [ test "x$explicit_libz_linking" = "xyes" -a "$local_libz" != "yes" ],
                [ AC_MSG_ERROR([libz linking failed]) ]
              )
       ]
     )

AC_ARG_ENABLE([libbz2-linking],
              AS_HELP_STRING(--disable-libbz2-linking, [disable linking with libbz2 and disable libbz2 compression support]),
              [explicit_libbz2_linking=yes],
              [enable_libbz2_linking=yes])
AS_IF( [test "x$enable_libbz2_linking" != "xyes"],
       [ AC_MSG_WARN([libbz2 compression support has been disabled by user])
         local_libbz2="no"
       ],
       [ AC_CHECK_LIB(bz2, [BZ2_bzCompress], [], [AC_MSG_WARN([library libbz2 not found])])
         AC_CHECK_HEADER(bzlib.h,
                         [local_libbz2="yes"
                          AC_DEFINE(HAVE_BZLIB_H, 1, [bzlib.h header file is available])
                         ],
                         [AC_MSG_WARN([Cannot find bzlib.h header file])
                          local_libbz2="no"
                         ])
         if test "$local_libbz2" = "yes" ; then
            AC_LINK_IFELSE([AC_LANG_PROGRAM([[
extern "C" {
#if HAVE_BZLIB_H
#include <bzlib.h>
#endif
}]],
[[
   bz_stream *ptr = (bz_stream *)0;
   BZ2_bzCompress(ptr, 0);
]])
                           ],
                           [ AC_DEFINE(LIBBZ2_AVAILABLE, 1, [header and linking is available to have libbz2 functions])],
                           [ local_libbz2="no" ])
         else
            AC_MSG_WARN([libbz2 compression support not available])
         fi
         AS_IF( [ test "x$explicit_libbz2_linking" = "xyes" -a "$local_libbz2" != "yes" ],
                [ AC_MSG_ERROR([libbz2 linking failed]) ]
              )
       ]
     )

AC_ARG_ENABLE( [liblzo2-linking],
               AS_HELP_STRING(--disable-liblzo2-linking, [disable linking with liblzo2 and disable lzo compression support]),
               [explicit_liblzo2_linking=yes],
               [enable_liblzo2_linking=yes]
             )
AS_IF( [test "x$enable_liblzo2_linking" !=
"xyes"],
       [ AC_MSG_WARN([lzo compression support has been disabled by user])
         local_liblzo2="no"
       ],
       [ AC_CHECK_LIB(lzo2, [lzo1x_1_compress], [], [AC_MSG_WARN([library liblzo2 not found])])
         AC_CHECK_HEADER(lzo/lzo1x.h,
                         [local_liblzo2="yes"
                          AC_DEFINE(HAVE_LZO_LZO1X_H, 1, [lzo/lzo1x.h header file is available])
                         ],
                         [AC_MSG_WARN([Cannot find lzo/lzo1x.h header file])
                          local_liblzo2="no"
                         ])
         if test "$local_liblzo2" = "yes" ; then
            AC_LINK_IFELSE([AC_LANG_PROGRAM([[
extern "C" {
#if HAVE_LZO_LZO1X_H
#include <lzo/lzo1x.h>
#endif
}]],
[[
   (void)lzo1x_1_compress(0, 0, 0, 0, 0);
]])
                           ],
                           [ AC_DEFINE(LIBLZO2_AVAILABLE, 1, [header and linking is available to have lzo functions])],
                           [ local_liblzo2="no" ])
         else
            AC_MSG_WARN([lzo compression support not available])
         fi
         AS_IF( [ test "x$explicit_liblzo2_linking" = "xyes" -a "$local_liblzo2" != "yes" ],
                [ AC_MSG_ERROR([liblzo2 linking failed]) ]
              )
       ]
     )

AC_ARG_ENABLE( [libxz-linking],
               AS_HELP_STRING(--disable-libxz-linking, [disable linking with libxz/liblzma and disable xz compression support]),
               [explicit_libxz_linking=yes],
               [enable_libxz_linking=yes]
             )
AS_IF( [test "x$enable_libxz_linking" != "xyes"],
       [ AC_MSG_WARN([libxz compression support has been disabled by user])
         local_libxz="no"
       ],
       [ AC_CHECK_LIB(lzma, [lzma_code], [], [AC_MSG_WARN([library liblzma not found])])
         AC_CHECK_HEADER(lzma.h,
                         [local_libxz="yes"
                          AC_DEFINE(HAVE_LZMA_H, 1, [lzma.h header file is available])
                         ],
                         [AC_MSG_WARN([Cannot find lzma.h header file])
                          local_libxz="no"
                         ])
         if test "$local_libxz" = "yes" ; then
            AC_LINK_IFELSE([AC_LANG_PROGRAM([[
extern "C" {
#if HAVE_LZMA_H
#include <lzma.h>
#endif
}]],
[[
   lzma_stream ptr = LZMA_STREAM_INIT;
   lzma_ret tmp = lzma_easy_encoder(&ptr, 2, LZMA_CHECK_CRC32);
]])
                           ],
                           [ AC_DEFINE(LIBLZMA_AVAILABLE, 1, [header and linking is available to have liblzma functions])],
                           [ local_libxz="no" ])
         else
            AC_MSG_WARN([libxz compression support not available])
         fi
         AS_IF( [ test "x$explicit_libxz_linking" = "xyes" -a "$local_libxz" != "yes" ],
                [ AC_MSG_ERROR([libxz linking
failed]) ] ) ] ) AC_ARG_ENABLE( [libzstd-linking], AS_HELP_STRING(--disable-libzstd-linking, [disable linking with libzstd and disable zstd compression support]), [explicit_libzstd_linking=yes], [enable_libzstd_linking=yes] ) AS_IF( [test "x$enable_libzstd_linking" != "xyes"], [ AC_MSG_WARN([libzstd compression support has been disabled by user]) local_libzstd="no" ], [ AC_CHECK_LIB(zstd, [ZSTD_createCStream], [], [AC_MSG_WARN([library libzstd not found])]) AC_CHECK_HEADER(zstd.h, [local_libzstd="yes" AC_DEFINE(HAVE_ZSTD_H, 1, [zstd.h header file is available]) ], [AC_MSG_WARN([Cannot find zstd.h header file]) local_libzstd="no" ]) if test "$local_libzstd" = "yes" ; then min_maj_version_zstd=1 min_min_version_zstd=3 AC_DEFINE_UNQUOTED(MIN_MAJ_VERSION_ZSTD, "$min_maj_version_zstd", [libzstd minimum major version]) AC_DEFINE_UNQUOTED(MIN_MIN_VERSION_ZSTD, "$min_min_version_zstd", [libzstd minimum minor version]) AC_RUN_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_ZSTD_H #include <zstd.h> #endif #if HAVE_STDLIB_H #include <stdlib.h> #endif }]], [[ unsigned int min_version = atoi(MIN_MAJ_VERSION_ZSTD)*100*100 + atoi(MIN_MIN_VERSION_ZSTD)*100 + 0; if(ZSTD_versionNumber() < min_version) return 1; ZSTD_CStream *zs = ZSTD_createCStream(); (void)ZSTD_freeCStream(zs); return 0; ]]) ], [ AC_DEFINE(LIBZSTD_AVAILABLE, 1, [header and linking is available to have libzstd functions])], [ local_libzstd="no" ], [ # if cross compiling AC_LINK_IFELSE([AC_LANG_PROGRAM( [[ extern "C" { #if HAVE_ZSTD_H #include <zstd.h> #endif #if HAVE_STDLIB_H #include <stdlib.h> #endif }]], [[ ZSTD_CStream *zs = ZSTD_createCStream(); (void)ZSTD_versionNumber(); (void)ZSTD_freeCStream(zs); return 0; ]]) ], [ AC_DEFINE(LIBZSTD_AVAILABLE, 1, [header and linking is available to have libzstd functions])], [ local_libzstd="no" ] ) ] ) else AC_MSG_WARN([libzstd compression support not available]) fi AS_IF( [ test "x$explicit_libzstd_linking" = "xyes" -a "$local_libzstd" != "yes" ], [ AC_MSG_ERROR([libzstd linking failed]) ] ) ] ) AC_ARG_ENABLE(
[liblz4-linking], AS_HELP_STRING(--disable-liblz4-linking, [disable linking with liblz4 and disable lz4 compression support]), [explicit_liblz4_linking=yes], [enable_liblz4_linking=yes] ) AS_IF( [ test "x$enable_liblz4_linking" != "xyes" ], [ AC_MSG_WARN([liblz4 compression support has been disabled by user]) local_liblz4="no" ], [ AC_CHECK_LIB(lz4, [LZ4_decompress_safe], [], [AC_MSG_WARN([library liblz4 not found])]) AC_CHECK_HEADER(lz4.h, [ local_liblz4="yes" AC_DEFINE(HAVE_LZ4_H, 1, [lz4.h header file is available]) ], [ AC_MSG_WARN([Cannot find lz4.h header file]) local_liblz4="no" ]) if test "$local_liblz4" = "yes" ; then AC_RUN_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_LZ4_H #include <lz4.h> #endif #if HAVE_STRING_H #include <string.h> #endif #if HAVE_STDLIB_H #include <stdlib.h> #endif #if HAVE_STDIO_H #include <stdio.h> #endif } ]], [[ const char *src = "Quand on lui marche sur les pieds, le serpent hausse les epaules."; unsigned int src_sz = strlen(src); unsigned int zip_sz = LZ4_compressBound(src_sz); char *zip = (char *)malloc(zip_sz); char *unzip = (char *)malloc(src_sz + 1); if(zip == 0 || unzip == 0) return 1; /* allocation problem */ zip_sz = LZ4_compress_default(src, zip, src_sz, zip_sz); if(zip_sz <= 0) return 1; /* compression problem */ if(LZ4_decompress_safe(zip, unzip, zip_sz, src_sz) != src_sz) return 1; /* decompression problem */ unzip[src_sz] = '\0'; if(strncmp(src, unzip, src_sz) != 0) return 1; /* decompression result does not match the original data */ free(zip); free(unzip); return 0; ]]) ], [ AC_DEFINE(LIBLZ4_AVAILABLE, 1, [header and linking is available to have liblz4 functions]) ], [ local_liblz4="no" ], [ # if cross compiling AC_LINK_IFELSE([AC_LANG_PROGRAM( [[ extern "C" { #if HAVE_LZ4_H #include <lz4.h> #endif #if HAVE_STRING_H #include <string.h> #endif #if HAVE_STDLIB_H #include <stdlib.h> #endif #if HAVE_STDIO_H #include <stdio.h> #endif } ]], [[ /* the following is not expected to be executed just compiled and linked */ const char *src = "Quand on lui marche sur les pieds, le serpent hausse les
epaules."; unsigned int src_sz = strlen(src); unsigned int zip_sz = LZ4_compressBound(src_sz); char *zip = (char *)malloc(zip_sz); char *unzip = (char *)malloc(src_sz + 1); zip_sz = LZ4_compress_default(src, zip, src_sz, zip_sz); (void)LZ4_decompress_safe(zip, unzip, zip_sz, src_sz); return 0; ]]) ], [ AC_DEFINE(LIBLZ4_AVAILABLE, 1, [headar and linking is available to have liblz4 fonctions]) ], [ local_liblz4="no" ]) ] ) else AC_MSG_WARN([liblz4 compression support not available]) fi AS_IF( [ test "x$explicit_liblz4_linking" = "xyes" -a "$local_liblz4" != "yes" ], [ AC_MSG_ERROR([liblz4 linking failed]) ] ) ] ) AC_ARG_ENABLE( [libgcrypt-linking], AS_HELP_STRING(--disable-libgcrypt-linking, [disable linking with libgcrypt which disables strong encryption support]), [explicit_libgcrypt_linking=yes], [enable_libgcrypt_linking=yes]) AS_IF( [test "x$enable_libgcrypt_linking" != "xyes"], [ AC_MSG_WARN([strong encryption support has been disabled by user]) local_crypto="no" ], [ AC_CHECK_LIB(gpg-error, [gpg_err_init], [], []) AC_CHECK_LIB(gcrypt, [gcry_check_version], [], []) AC_CHECK_HEADER(gcrypt.h, [local_crypto="yes" AC_DEFINE(HAVE_GCRYPT_H, 1, [gcrypt.h header file is available]) ], [AC_MSG_WARN([Cannt find gcrypt.h header file]) local_crypto="no" ]) if test "$local_crypto" = "yes" ; then min_version_gcrypt="1.4.0" AC_DEFINE_UNQUOTED(MIN_VERSION_GCRYPT, "$min_version_gcrypt", [libgcrypt minimum version]) min_version_gcrypt_hash_bug="1.6.0" AC_DEFINE_UNQUOTED(MIN_VERSION_GCRYPT_HASH_BUG, "$min_version_gcrypt_hash_bug", [ligcrypt minimum version without hash bug]) AC_MSG_CHECKING([for libgcrypt usability]) AC_RUN_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_GCRYPT_H #include #endif } #include using namespace std; ]], [[ if(!gcry_check_version(MIN_VERSION_GCRYPT)) { cout << "ligcrypt version too low, minimum version is " << MIN_VERSION_GCRYPT << endl; exit(1); } else exit(0); ]]) ], [ AC_DEFINE(CRYPTO_AVAILABLE, 1, [header and linking is available to have strong 
encryption works]) AC_MSG_RESULT([ok]) AC_RUN_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_GCRYPT_H #include #endif } ]], [[ if(!gcry_check_version(MIN_VERSION_GCRYPT_HASH_BUG)) exit(1); else exit(0); ]]) ], [], [ libgcrypt_hash_bug="yes" ]) ], [ if test "$?" = "1" ; then AC_MSG_RESULT([failed: need libgcypt >= $min_version_gcrypt, disabling strong encryption support]) else AC_MSG_RESULT([failed: libgcrypt is unusable, cannot even call gcry_check_version(). Disabling strong encryption support]) fi local_crypto="no" ], [ # if cross compiling AC_LINK_IFELSE([AC_LANG_PROGRAM( [[ extern "C" { #if HAVE_GCRYPT_H #include #endif } #include using namespace std; ]], [[ (void)gcry_check_version(MIN_VERSION_GCRYPT); ]]) ], [ AC_DEFINE(CRYPTO_AVAILABLE, 1, [header and linking is available to have strong encryption works]) AC_MSG_RESULT([ok]) # libgcrypt_hash_bug="yes" # we assume with time version with that bug will vanish ], []) ]) else AC_MSG_WARN([strong encryption support not available]) fi AS_IF( [ test "x$explicit_libgcrypt_linking" = "xyes" -a "$local_crypto" != "yes" ], [ AC_MSG_ERROR([ligcrypt linking failed]) ] ) ] ) AC_ARG_ENABLE( [ea-support], AS_HELP_STRING(--disable-ea-support,[disable Extended Attributes support]), [explicit_ea_support=yes], [enable_ea_support=yes]) AS_IF( [test "x$enable_ea_support" != "xyes"], [ AC_MSG_CHECKING([for Extended Attribute support]) AC_MSG_RESULT([disabled]) local_ea_support="no" ], [ AC_CHECK_HEADERS([attr/xattr.h sys/xattr.h]) AC_CHECK_LIB(attr, [lgetxattr], [], []) AC_MSG_CHECKING([for Unix Extended Attribute support]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[extern "C" { #if HAVE_SYS_TYPES_H #include #endif #if HAVE_ATTR_XATTR_H && ! 
HAVE_SYS_XATTR_H #include <attr/xattr.h> #endif #if HAVE_SYS_XATTR_H #include <sys/xattr.h> #endif }]], [ lgetxattr((char *)0, (char *)0, (void *)0, 0); ]) ], [ AC_DEFINE(EA_SUPPORT, [], [if defined, activates support for Extended Attributes]) local_ea_support="yes" AC_MSG_RESULT([yes]) ], [ AC_MSG_RESULT([no]) AC_CHECK_HEADERS([sys/xattr.h]) AC_CHECK_LIB(c, [fgetxattr]) AC_MSG_CHECKING([for Mac OS X Extended Attribute support]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[extern "C" { #if HAVE_SYS_XATTR_H #include <sys/xattr.h> #endif }]], [ getxattr((char *)0, (char *)0, (void *)0, 0, 0, XATTR_NOFOLLOW); ]) ], [ AC_DEFINE(EA_SUPPORT, [], [if defined, activates support for Extended Attributes]) AC_DEFINE(OSX_EA_SUPPORT, [], [if defined, activates support for Mac OS X Extended Attributes]) local_ea_support="yes" AC_MSG_RESULT([yes]) ], [ local_ea_support="no" AC_MSG_RESULT([no]) ]) ]) AS_IF( [ test "x$explicit_ea_support" = "xyes" -a "$local_ea_support" != "yes" ], [ AC_MSG_ERROR([Failed finding Extended Attribute support]) ] ) ] ) AC_MSG_CHECKING([ext2fs.h availability]) AC_ARG_ENABLE( [nodump-flag], AS_HELP_STRING(--disable-nodump-flag, [disable the ext2/3/4 Filesystem Specific Attribute support, in particular the --nodump feature]), [explicit_nodump_flag=yes], [enable_nodump_flag=yes]) AS_IF( [test "x$enable_nodump_flag" != "xyes"], [AC_MSG_RESULT([extX FSA disabled])], [AC_LINK_IFELSE([AC_LANG_PROGRAM([[extern "C" { #include <ext2fs/ext2_fs.h> #if HAVE_SYS_IOCTL_H #include <sys/ioctl.h> #endif }]],[[int fd, f; ioctl(fd, EXT2_IOC_GETFLAGS, &f);]] ) ], [ AC_DEFINE(LIBDAR_NODUMP_FEATURE, [NODUMP_EXT2FS], [if defined, activates the ext2/3 nodump flag feature]) local_nodump_feature="yes" AC_MSG_RESULT([found]) ], [ AC_LINK_IFELSE( [AC_LANG_PROGRAM([[extern "C" { #include <linux/ext2_fs.h> #if HAVE_SYS_IOCTL_H #include <sys/ioctl.h> #endif }]],[[int fd, f; ioctl(fd, EXT2_IOC_GETFLAGS, &f);]]) ], [ AC_DEFINE(LIBDAR_NODUMP_FEATURE, [NODUMP_LINUX], [if defined, activates the ext2/3 nodump flag feature]) local_nodump_feature="yes" AC_MSG_RESULT([found]) ], [ AC_MSG_RESULT([NOT FOUND])
local_nodump_feature="no" AC_MSG_WARN([cannot find ext2_fs.h header file, nodump-flag and extX FSA features will not be available]) ]) ]) AS_IF( [ test "x$explicit_nodump_flag" = "xyes" -a "$local_nodump_feature" != "yes" ], [ AC_MSG_ERROR([Failed activating nodump feature]) ] ) ] ) AC_MSG_CHECKING([birth time availability]) AC_ARG_ENABLE( [birthtime], AS_HELP_STRING(--disable-birthtime, [disable the HFS+ Filesystem Specific Attribute support]), [explicit_birthtime=yes], [enable_birthtime=yes]) AS_IF( [test "x$enable_birthtime" != "xyes"], [AC_MSG_RESULT([hfs+ FSA disabled])], [AC_LINK_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_SYS_TYPES_H #include <sys/types.h> #endif #if HAVE_SYS_STAT_H #include <sys/stat.h> #endif #if HAVE_UNISTD_H #include <unistd.h> #endif }]], [[ struct stat tmp; int ret = stat("/", &tmp); time_t birth = tmp.st_birthtime; ]]) ], [ AC_DEFINE(LIBDAR_BIRTHTIME, 1, [if defined, activates the support for HFS+ create time FSA]) local_birthtime="yes" AC_MSG_RESULT([found]) ], [ AC_MSG_RESULT([NOT FOUND]) AC_MSG_WARN([Cannot find support for birthtime, HFS+ FSA support will not be available]) ]) AS_IF( [ test "x$explicit_birthtime" = "xyes" -a "$local_birthtime" != "yes" ], [ AC_MSG_ERROR([birth time support not available]) ] ) ] ) AC_ARG_ENABLE( [gnugetopt], AS_HELP_STRING(--disable-gnugetopt, [avoid linking with libgnugetopt]), [explicit_gnugetopt=yes], [enable_gnugetopt=yes]) AS_IF( [test "x$enable_gnugetopt" != "xyes"], [], [AC_CHECK_LIB(gnugetopt, [getopt_long], [], [ AS_IF( [ test "x$explicit_gnugetopt" = "xyes"], [ AC_MSG_ERROR([gnugetopt linking failed]) ] ) ])] ) AC_ARG_ENABLE( [librsync-linking], AS_HELP_STRING(--disable-librsync-linking, [disable linking with librsync and disable delta compression support]), [explicit_librsync_linking=yes], [enable_librsync_linking=yes]) AS_IF( [test "x$enable_librsync_linking" != "xyes"], [ AC_MSG_WARN([librsync delta-compression support has been disabled by user]) local_librsync="no" ], [ AC_CHECK_LIB(rsync, [rs_strerror], [],
[AC_MSG_WARN([librsync library not found])]) AC_CHECK_HEADER(librsync.h, [local_librsync="yes" AC_DEFINE(HAVE_LIBRSYNC_H, 1, [librsync.h header file is available]) ], [AC_MSG_WARN([Cannot find librsync.h header file]) local_librsync="no" ]) if test "$local_librsync" = "yes" ; then AC_LINK_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_LIBRSYNC_H #include <stdio.h> #include <librsync.h> #endif }]], [[ rs_result err = RS_DONE; (void) rs_strerror(err); ]]) ], [ AC_DEFINE(LIBRSYNC_AVAILABLE, 1, [librsync is usable])], [ local_librsync="no" ]) else AC_MSG_WARN([librsync compression support not available]) fi AS_IF( [ test "x$explicit_librsync_linking" = "xyes" -a "$local_librsync" != "yes" ], [ AC_MSG_ERROR([librsync linking failed]) ] ) ] ) AC_ARG_ENABLE( [libcurl-linking], AS_HELP_STRING(--disable-libcurl-linking, [ignore libcurl and avoid linking against it]), [explicit_libcurl_linking=yes], [enable_libcurl_linking=yes]) AS_IF( [test "x$enable_libcurl_linking" != "xyes"], [ AC_MSG_WARN([libcurl and thus remote repository support has been disabled by user]) local_libcurl="no" ], [ PKG_CHECK_EXISTS(libcurl, [ PKG_CHECK_MODULES(LIBCURL, libcurl, [], [AC_MSG_ERROR([libcurl not found, but reported to exist !?!])]) AC_DEFINE(HAVE_LIBCURL, 1, [Libcurl library availability]) ], [ AC_CHECK_LIB(curl, [curl_global_init], [], [AC_MSG_WARN([libcurl library not found])]) AC_DEFINE(HAVE_LIBCURL, 1, [Libcurl library availability]) ]) CPPFLAGS___cache="$CPPFLAGS" CPPFLAGS="$LIBCURL_CFLAGS $CPPFLAGS" LIBS___cache="$LIBS" LIBS="$LIBCURL_LIBS $LIBS" AC_CHECK_HEADER(curl/curl.h, [ local_libcurl="yes" AC_DEFINE(HAVE_CURL_CURL_H, 1, [curl/curl.h header file is available]) ], [ AC_MSG_WARN([Cannot find curl/curl.h header file]) local_libcurl="no" ]) if test "$local_libcurl" = "yes" ; then AC_LINK_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_CURL_CURL_H #include <curl/curl.h> #endif } ]], [[ (void) curl_global_init(CURL_GLOBAL_ALL); ]]) ], [AC_DEFINE(LIBCURL_AVAILABLE, 1, [libcurl is usable])], [ local_libcurl="no" ]) else
AC_MSG_WARN([remote repository support not available]) fi CPPFLAGS="$CPPFLAGS___cache" LIBS="$LIBS___cache" unset CPPFLAGS___cache unset LIBS___cache AS_IF( [ test "x$explicit_libcurl_linking" = "xyes" -a "$local_libcurl" != "yes" ], [ AC_MSG_ERROR([libcurl linking failed]) ], [ CPPFLAGS="$LIBCURL_CFLAGS $CPPFLAGS" LIBS="$LIBCURL_LIBS $LIBS" ] ) ] ) AC_ARG_ENABLE( [fadvise], AS_HELP_STRING(--disable-fadvise, [avoid using the fadvise(2) system call]), [explicit_fadvise=yes], [enable_fadvise=yes]) AS_IF( [test "x$enable_fadvise" != "xyes"], [ AC_MSG_WARN([avoiding the use of the fadvise(2) system call, per user request])], [ AC_MSG_CHECKING([for posix_fadvise support]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[extern "C" { #if HAVE_FCNTL_H #include <fcntl.h> #endif }]], [ (void)posix_fadvise(0,0,1,POSIX_FADV_NORMAL); ]) ], [ AC_DEFINE(HAVE_POSIX_FADVISE, 1, [whether the system has support for posix_fadvise()]) local_posix_fadvise="yes" AC_MSG_RESULT(available) ], [ AC_MSG_RESULT([not available]) AS_IF( [ test "x$explicit_fadvise" = "xyes" ], [ AC_MSG_ERROR([fadvise not available]) ] ) ] ) ] ) AC_MSG_CHECKING([for getopt() availability]); AC_LINK_IFELSE([AC_LANG_PROGRAM([[extern "C" { #if HAVE_STDIO_H #include <stdio.h> #endif #if HAVE_GETOPT_H #include <getopt.h> #else #if HAVE_UNISTD_H #include <unistd.h> #endif #endif }]], [ getopt(0, 0, 0); ]) ], [ AC_MSG_RESULT([ok]) ], [AC_MSG_ERROR([absent but required])]) AC_MSG_CHECKING([for getopt_long() availability]); AC_LINK_IFELSE([AC_LANG_PROGRAM([[extern "C" { #if HAVE_STDIO_H #include <stdio.h> #endif #if HAVE_GETOPT_H #include <getopt.h> #else #if HAVE_UNISTD_H #include <unistd.h> #endif #endif }]], [ getopt_long(0, 0, 0, 0, 0); ]) ], [ local_have_getopt_long="yes" AC_DEFINE(HAVE_GETOPT_LONG, 1, [whether getopt_long() is available]) AC_MSG_RESULT([available]) ], [AC_MSG_RESULT([NOT AVAILABLE])]) AC_ARG_ENABLE( [examples], AS_HELP_STRING(--enable-examples, [build example and testing programs]), [ AS_IF([ test "x$enable_examples" != "xyes" ], [ AC_MSG_ERROR([invalid argument given to --enable-examples]) ]
) examples="yes" ], [examples="false"]) AC_ARG_ENABLE( [os-bits], AS_HELP_STRING(--enable-os-bits=arg, [arg is 32 or 64, in case one wants to override the detected system value]), [AC_DEFINE_UNQUOTED(OS_BITS, $enableval, [manually set CPU's registers' size])], [ AC_CHECK_HEADER(inttypes.h, [ AC_COMPILE_IFELSE( [AC_LANG_PROGRAM([extern "C" { #include <inttypes.h> }], [ uint16_t a = 0; uint32_t b = 0; uint64_t c = 0; int16_t d = 0; int32_t e = 0; int64_t f = 0; return a+b+c+d+e+f; ]) ], [ AC_DEFINE(HAVE_INTTYPES_H, 1, [inttypes.h header availability]) ], [ AC_MSG_ERROR([Cannot find *int*_t type declarations in header file, the --enable-os-bits=... option must be used]) ]) ], [ AC_MSG_ERROR([Cannot find the inttypes.h header file, the --enable-os-bits=... option must be used]) ] ) ] ) AC_ARG_ENABLE( [mode], AS_HELP_STRING(--enable-mode=arg, [where arg is 32, 64 or infinint. Makes dar internally use 32 bit, 64 bit or limitless integers (64 bit integers are used if this option is not given)]), [ build_mode=$enableval if test "$build_mode" != "32" -a "$build_mode" != "64" -a "$build_mode" != "infinint" ; then AC_MSG_ERROR([Invalid argument given to --enable-mode option]) fi ], [build_mode=64]) AC_ARG_ENABLE( [furtive-read], AS_HELP_STRING(--disable-furtive-read, [Ignore furtive read mode availability on systems that support it]), [explicit_furtive_read=yes], [enable_furtive_read=yes]) AS_IF( [test "x$enable_furtive_read" != "xyes"], [ local_furtive_read_mode="no" AC_MSG_WARN([Furtive read mode disabled]) ], [ AC_MSG_CHECKING([furtive read mode availability]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_SYS_TYPES_H #include <sys/types.h> #endif #if HAVE_SYS_STAT_H #include <sys/stat.h> #endif #if HAVE_FCNTL_H #include <fcntl.h> #endif #if HAVE_DIRENT_H #include <dirent.h> #endif } ]], [[ int x = O_NOATIME; int fd = open("/",O_RDONLY|O_NOATIME); #if HAVE_FDOPENDIR (void)fdopendir(fd); #else syntactically incorrect statement here to force compilation to fail!
#endif ]]) ], [ AC_DEFINE(FURTIVE_READ_MODE_AVAILABLE, 1, [furtive read mode is available]) AC_MSG_RESULT(available) local_furtive_read_mode="yes" ], [ AC_MSG_RESULT([not available]) local_furtive_read_mode="no" ]) AS_IF( [ test "x$explicit_furtive_read" = "xyes" -a "$local_furtive_read_mode" != "yes" ], [ AC_MSG_ERROR([furtive read mode not available]) ] ) ] ) AC_ARG_ENABLE( [debug], AS_HELP_STRING(--enable-debug, [build targets with debugging options and no optimization]), [ AS_IF([ test "x$enable_debug" != "xyes" ], [ AC_MSG_ERROR([invalid argument given to --enable-debug]) ] ) CXXFLAGS="-g -Wall" CFLAGS="-g -Wall" LDFLAGS="-g -Wall" debug_static="yes" AC_DEFINE(LIBDAR_NO_OPTIMIZATION, 1, [if defined, informs the code that no optimization has been used for compilation]) ], [ debug_static="no" ]) AC_ARG_ENABLE( [pedantic], AS_HELP_STRING(--enable-pedantic, [enable pedantic syntactical checks at compilation, use only for debugging purposes!]), [ AS_IF([ test "x$enable_pedantic" != "xyes" ], [ AC_MSG_ERROR([invalid argument given to --enable-pedantic]) ] ) CXXFLAGS="$CXXFLAGS -pedantic -Wno-long-long" ], []) AC_ARG_ENABLE( [build-html], AS_HELP_STRING(--disable-build-html, [don't build programming documentation (in particular libdar API documentation) and html man page]), [explicit_build_html=yes], [enable_build_html=yes]) AS_IF( [ test "x$enable_build_html" != "xyes"], [ AC_MSG_WARN([documentation not built per user request]) doxygen="no" groff="no" ], [ AC_CHECK_PROG(doxygen, doxygen, [yes], [no], [$PATH]) AC_MSG_CHECKING([for doxygen version]) if test "$doxygen" = "yes" ; then n1=`doxygen --version | cut -d '.' -f 1` n2=`doxygen --version | cut -d '.'
-f 2` if test $n1 -gt 1 -o \( $n1 -eq 1 -a $n2 -ge 3 \) ; then AC_MSG_RESULT([ >= 1.3]) else AC_MSG_RESULT([ too old (< 1.3) ignoring doxygen]) doxygen="no" fi fi AC_CHECK_PROG(dot, dot, [YES], [NO], [$PATH]) # upper case value for dot variable because it goes as is into the doxyfile file AC_CHECK_PROG(tmp, man, [yes], [no], [$PATH]) if test "$tmp" = "yes" ; then AC_CHECK_PROG(groff, groff, [yes], [no], [$PATH]) else groff="no" fi AS_IF( [ test "x$explicit_build_html" = "xyes" -a \( "$doxygen" != "yes" -o "$groff" != "yes" \) ], [ AC_MSG_ERROR([lacking prerequisites to build documentation]) ] ) ] ) AC_ARG_ENABLE( [upx], AS_HELP_STRING(--disable-upx, [by default configure looks for UPX and, if available, makes executables compressed at installation time; you can disable this feature]), [explicit_upx=yes], [enable_upx=yes]) AS_IF( [ test "x$enable_upx" != "xyes" ], [ AC_MSG_NOTICE([ignoring UPX]) upx="no" ], [ AC_CHECK_PROG(upx, upx, [yes], [no], [$PATH]) AS_IF( [ test "x$explicit_upx" = "xyes" -a "$upx" != "yes" ], [ AC_MSG_ERROR([upx is missing]) ] ) ] ) AC_ARG_ENABLE( [fast-dir], AS_HELP_STRING(--disable-fast-dir, [disable optimization for large directories; doing so has a small positive impact on memory requirements but a huge drawback on execution time]), [ AS_IF([ test "x$enable_fast_dir" != "xno" ], [ AC_MSG_ERROR([invalid argument given to --disable-fast-dir]) ] ) ], [AC_DEFINE(LIBDAR_FAST_DIR, 1, [activation of speed optimization for large directories]) local_fast_dir="yes" ] ) AC_ARG_ENABLE( [gpgme-linking], AS_HELP_STRING(--disable-gpgme-linking, [disable linking with gpgme which disables asymmetric crypto algorithms]), [explicit_gpgme_linking=yes], [enable_gpgme_linking=yes]) AS_IF( [ test "x$enable_gpgme_linking" != "xyes" ], [ AC_MSG_WARN([asymmetric encryption support has been disabled by user]) local_gpgme="no" ], [ if test $local_crypto != no ; then gpgme_min_version="1.2.0" AC_DEFINE_UNQUOTED(GPGME_MIN_VERSION, "$gpgme_min_version", [minimum version expected of GPGME])
m4_ifdef([AM_PATH_GPGME], [ AM_PATH_GPGME($gpgme_min_version, [ CPPFLAGS___cache="$CPPFLAGS" CPPFLAGS="$GPGME_CFLAGS $CPPFLAGS" LIBS___cache="$LIBS" LIBS="$GPGME_LIBS $LIBS" AC_CHECK_HEADERS([gpgme.h]) AC_CHECK_LIB(gpgme, [gpgme_signers_add], [], []) AC_MSG_CHECKING([for libgpgme usability]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[ #if HAVE_GPGME_H #include <gpgme.h> #endif ]], [[ gpgme_ctx_t context; gpgme_error_t err = gpgme_new(&context); gpgme_release(context); return err; ]]) ], [ local_gpgme="yes" AC_DEFINE(GPGME_SUPPORT, 1, [GPGME is available to support public key based ciphering]) AC_MSG_RESULT(ok) ], [ local_gpgme="no" AC_MSG_RESULT([not usable! See config.log for details]) ]) CPPFLAGS="$CPPFLAGS___cache" unset CPPFLAGS___cache LIBS="$LIBS___cache" unset LIBS___cache ], [ AC_MSG_WARN([Public key support (GPGME linking) requires a version greater than $gpgme_min_version]) ] ) ], [AC_MSG_WARN([AM_PATH_GPGME macro not found!])]) else AC_MSG_WARN([Public key support (GPGME linking) requires libgcrypt (strong encryption support)]) fi AS_IF( [ test "x$explicit_gpgme_linking" = "xyes" -a "$local_gpgme" != "yes" ], [ AC_MSG_ERROR([gpgme linking failed]) ], [ CPPFLAGS="$GPGME_CFLAGS $CPPFLAGS" LIBS="$GPGME_LIBS $LIBS" ] ) ] ) AC_ARG_ENABLE( [thread-safe], AS_HELP_STRING(--disable-thread-safe, [libdar is thread-safe if POSIX mutexes are available; you can manually disable the use of POSIX mutexes, but the resulting libdar library will no longer be thread-safe]), [explicit_thread_safe=yes], [enable_thread_safe=yes]) AS_IF( [ test "x$enable_thread_safe" != "xyes" ], [ AC_MSG_NOTICE([thread-safe support disabled])], [ AC_CHECK_LIB(pthread, [pthread_mutex_init], [], []) AC_MSG_CHECKING([for POSIX mutex]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[extern "C" { #if HAVE_PTHREAD_H #include <pthread.h> #endif }]], [[ pthread_mutex_t mutex; pthread_mutex_init(&mutex, (const pthread_mutexattr_t*)0); pthread_mutex_lock(&mutex); pthread_mutex_unlock(&mutex);]]) ], [ AC_DEFINE(MUTEX_WORKS, 1, [POSIX mutex
(pthread_mutex_t) is available]) local_mutex_works="yes" AC_MSG_RESULT(yes) ], [ AC_MSG_RESULT(no)]) AC_MSG_CHECKING([for reentrant stdlib calls]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[extern "C" { #if HAVE_TIME_H #include <time.h> #endif #if HAVE_SYS_TYPES_H #include <sys/types.h> #endif #if HAVE_GRP_H #include <grp.h> #endif #if HAVE_PWD_H #include <pwd.h> #endif #if HAVE_DIRENT_H #include <dirent.h> #endif }]], [[ #if HAVE_CTIME_R char *val1 = ctime_r(0, 0); #else error(); // should not compile as expected #endif #if HAVE_GETGRNAM_R int val2 = getgrnam_r(0, 0, 0, 0, 0); #else error(); // should not compile as expected #endif #if HAVE_GETPWNAM_R int val3 = getpwnam_r(0, 0, 0, 0, 0); #else error(); // should not compile as expected #endif #if HAVE_LOCALTIME_R struct tm *val4 = localtime_r(0, 0); #else error(); // should not compile as expected #endif #if HAVE_READDIR_R int val5 = readdir_r(0, 0, 0); #else error(); // should not compile as expected #endif ]]) ], [AC_MSG_RESULT([all could be found])], [ AC_DEFINE(MISSING_REENTRANT_LIBCALL, 1, [Some *_r() stdlib calls are missing to permit complete thread-safe support by libdar]) local_mutex_works="no" AC_MSG_RESULT([some are missing]) ] ) AS_IF( [ test "x$explicit_thread_safe" = "xyes" -a "$local_mutex_works" != "yes" ], [ AC_MSG_ERROR([thread safe support not available]) ] ) ] ) AC_ARG_ENABLE( [execinfo], AS_HELP_STRING(--disable-execinfo, [disable reporting stack information on self diagnosed bugs even when execinfo is available]), [explicit_execinfo=yes], [enable_execinfo=yes]) AS_IF( [ test "x$enable_execinfo" != "xyes" ], [ AC_MSG_WARN([ignoring execinfo even if available]) ], [ AC_CHECK_HEADERS([execinfo.h]) AC_CHECK_LIB(execinfo, [backtrace], [], []) AC_MSG_CHECKING([for backtrace() usability]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[extern "C" { #if HAVE_EXECINFO_H #include <execinfo.h> #endif }]], [[ const int buf_size = 20; void *buffer[buf_size]; int x = backtrace(buffer, buf_size); ]]) ], [ AC_DEFINE(BACKTRACE_AVAILABLE, 1, [backtrace() call supported]) AC_MSG_RESULT(yes) ], [ AC_MSG_RESULT(no) AS_IF( [ test
"x$explicit_execinfo" = "xyes" ], [ AC_MSG_ERROR([execinfo not found]) ] ) ]) ] ) AC_ARG_ENABLE( [profiling], AS_HELP_STRING(--enable-profiling, [enable executable profiling]), [ AS_IF([ test "x$enable_profiling" != "xyes" ], [ AC_MSG_ERROR([invalid argument given to --enable-profiling]) ] ) profiling="yes" ]) AC_ARG_ENABLE( [debug-memory], AS_HELP_STRING(--enable-debug-memory, [log memory allocations and releases to /tmp/dar_debug_mem_allocation.txt this debugging option lead to a slow executable]), [ AS_IF([ test "x$enable_debug_memory" != "xyes" ], [ AC_MSG_ERROR([invalid argument given to --enable-debug-memory]) ] ) AC_DEFINE(LIBDAR_DEBUG_MEMORY, 1, [if defined, builds a very slow executable ])]) AC_ARG_ENABLE( [dar-static], AS_HELP_STRING(--disable-dar-static, [avoids building dar_static, a dar statically linked version]), [ explicit_dar_static=yes AS_IF([ test "x$enable_dar_static" != "xyes" ], [ build_static="no" ], [ build_static="yes" ]) ], [build_static="yes"]) AC_ARG_ENABLE( [threadar], AS_HELP_STRING(--disable-threadar, [avoid linking with libthreadar if available to prevent the use several threads inside libdar]), [explicit_threadar=yes], [enable_threadar=yes]) AS_IF( [ test "x$enable_threadar" != "xyes" ], [ AC_MSG_WARN([libthreadar support has been disabled by user]) ], [ PKG_CHECK_EXISTS(libthreadar, [ PKG_CHECK_MODULES(LIBTHREADAR, libthreadar, [], [AC_MSG_ERROR([libthreadar not found, but reported to exist !?!])]) ], [ # for libthreadar before release 1.5.1; AC_CHECK_LIB(threadar, [for_autoconf], [], []) # for libthreadar since release 1.5.1: AC_CHECK_LIB(threadar, [libthreadar_for_autoconf], [], []) ]) CPPFLAGS__cache="$CPPFLAGS" CPPFLAGS="$LIBTHREADAR_CFLAGS $CPPFLAGS" CXXFLAGS__cache="$CXXFLAGS" CXXFLAGS="$LIBTHREADAR_CFLAGS $CXXFLAGS" LIBS__cache="$LIBS" LIBS="$LIBTHREADAR_LIBS $LIBS" AC_CHECK_HEADER(libthreadar/libthreadar.hpp, [ AC_DEFINE(HAVE_LIBTHREADAR_LIBTHREADAR_HPP, 1, [libthreadar.h header file availability]) ], [ AC_MSG_WARN([Cannot 
find libthreadar/libthreadar.hpp header file]) ] ) AC_MSG_CHECKING([for libthreadar usability]) exp_maj_version_threadar=1 min_med_version_threadar=3 min_min_version_threadar=1 AC_DEFINE_UNQUOTED(EXPECTED_MAJ_VERSION_THREADAR, "$exp_maj_version_threadar", [libthreadar expected major version]) AC_DEFINE_UNQUOTED(MIN_MED_VERSION_THREADAR, "$min_med_version_threadar", [libthreadar minimal medium version]) AC_DEFINE_UNQUOTED(MIN_MIN_VERSION_THREADAR, "$min_min_version_threadar", [libthreadar minimal minor version]) AC_RUN_IFELSE([AC_LANG_PROGRAM( [[ #if HAVE_LIBTHREADAR_LIBTHREADAR_HPP #include <libthreadar/libthreadar.hpp> #endif #include <iostream> ]], [[ class mythread: public libthreadar::thread { public: mythread(int x): myx(x) {}; int getx() const { return myx; }; protected: virtual void inherited_run() { --myx; }; private: int myx; }; unsigned int maj, med, min; libthreadar::get_version(maj, med, min); if(maj != atoi(EXPECTED_MAJ_VERSION_THREADAR) || med < atoi(MIN_MED_VERSION_THREADAR) || (med == atoi(MIN_MED_VERSION_THREADAR) && min < atoi(MIN_MIN_VERSION_THREADAR))) { std::cout << "libthreadar version " << maj << "." << med << "." << min << " is too old, use at least version " << EXPECTED_MAJ_VERSION_THREADAR << "." << MIN_MED_VERSION_THREADAR << "."
<< MIN_MIN_VERSION_THREADAR << std::endl; return 1; } mythread toto(10); toto.run(); toto.join(); toto.getx(); std::cout << "ok" << std::endl; return 0; ]]) ], [ local_threadar=yes AC_DEFINE(LIBTHREADAR_AVAILABLE, 1, [when libthreadar could be found and linked against]) AC_MSG_RESULT(fine) ], [ local_threadar=no AC_MSG_RESULT(wrong) ], [ # if cross compiling AC_LINK_IFELSE( [AC_LANG_PROGRAM( [[ #if HAVE_LIBTHREADAR_LIBTHREADAR_HPP #include <libthreadar/libthreadar.hpp> #endif #include <iostream> ]], [[ class mythread: public libthreadar::thread { public: mythread(int x): myx(x) {}; int getx() const { return myx; }; protected: virtual void inherited_run() { --myx; }; private: int myx; }; mythread toto(10); toto.run(); toto.join(); toto.getx(); std::cout << "ok" << std::endl; ]]) ], [ local_threadar=yes AC_DEFINE(LIBTHREADAR_AVAILABLE, 1, [when libthreadar could be found and linked against]) AC_MSG_RESULT(fine) ], [ local_threadar=no AC_MSG_RESULT(wrong) ]) ] ) CPPFLAGS="$CPPFLAGS__cache" unset CPPFLAGS__cache CXXFLAGS="$CXXFLAGS__cache" unset CXXFLAGS__cache LIBS="$LIBS__cache" unset LIBS__cache AS_IF( [ test "x$explicit_threadar" = "xyes" -a "$local_threadar" != "yes" ], [ AC_MSG_ERROR([libthreadar linking failed]) ], [ CPPFLAGS="$LIBTHREADAR_CFLAGS $CPPFLAGS" CXXFLAGS="$LIBTHREADAR_CFLAGS $CXXFLAGS" LIBS="$LIBTHREADAR_LIBS $LIBS" ] ) ] ) AS_IF( [ test "$local_threadar" = "yes" ], [ AC_RUN_IFELSE( [ AC_LANG_PROGRAM( [[ #if HAVE_LIBTHREADAR_LIBTHREADAR_HPP #include <libthreadar/libthreadar.hpp> #endif ]], [[ unsigned int maj, med, min; libthreadar::get_version(maj, med, min); if(libthreadar::barrier::used_implementation() == "pthread_barrier_t") return 0; else return 1; ]]) ], [ local_threadar_barrier_mac=yes AC_DEFINE(LIBTHREADAR_BARRIER_MAC, 1, [libthreadar can emulate the barrier feature on MacOS]) ], [ local_threadar_barrier_mac=no ] ) ], [ local_threadar_barrier_mac=no ] ) AC_ARG_ENABLE([libargon2-linking], AS_HELP_STRING(--disable-libargon2-linking, [avoid linking with libargon2 if available, to prevent the argon2 hashing algorithm from being used]), [explicit_libargon2_linking=yes], [enable_libargon2_linking=yes]) AS_IF( [ test "x$enable_libargon2_linking" != "xyes" ], [ AC_MSG_WARN([libargon2 support has been disabled by user]) local_argon2=no ], [ AC_CHECK_HEADER(argon2.h, [ AC_DEFINE(HAVE_ARGON2_H, 1, [argon2.h header file availability]) ], [ AC_MSG_WARN([Cannot find argon2.h header file]) ] ) AC_CHECK_LIB(argon2, [argon2id_hash_raw], [], [AC_MSG_WARN([library libargon2 not found])]) AC_MSG_CHECKING([for libargon2 usability]) AC_RUN_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_ARGON2_H #include <argon2.h> #endif #if HAVE_STRING_H #include <string.h> #endif } ]], [[ char pass[] = "mot de passe"; char salt[] = "sel fin de cuisine"; constexpr unsigned int hash_size = 40; char hash[hash_size]; if(argon2id_hash_raw(2000, 100, 1, pass, strlen(pass), salt, strlen(salt), hash, hash_size) != ARGON2_OK) return 1; else return 0; ]]) ], [ local_argon2=yes AC_DEFINE(LIBARGON2_AVAILABLE, 1, [when libargon2 could be found and linked against]) AC_MSG_RESULT(ok) ], [ local_argon2=no AC_MSG_RESULT([not usable! See config.log for details]) ], [ # if cross compiling AC_LINK_IFELSE([AC_LANG_PROGRAM( [[ extern "C" { #if HAVE_ARGON2_H #include <argon2.h> #endif #if HAVE_STRING_H #include <string.h> #endif } ]], [[ char pass[] = "mot de passe"; char salt[] = "sel fin de cuisine"; constexpr unsigned int hash_size = 40; char hash[hash_size]; if(argon2id_hash_raw(2000, 100, 1, pass, strlen(pass), salt, strlen(salt), hash, hash_size) != ARGON2_OK) return 1; else return 0; ]]) ], [ local_argon2=yes AC_DEFINE(LIBARGON2_AVAILABLE, 1, [when libargon2 could be found and linked against]) AC_MSG_RESULT(ok) ], [ local_argon2=no AC_MSG_RESULT([not usable!
See config.log for details]) ]) ] ) AS_IF( [ test "x$explicit_libargon2_linking" = "xyes" -a "$local_argon2" != "yes" ], [ AC_MSG_ERROR([libargon2 is missing to build libdar]) ] ) ] ) AC_MSG_CHECKING([static linking]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[ extern "C" { #if HAVE_STDIO_H #include <stdio.h> #endif #if HAVE_EXECINFO_H #include <execinfo.h> #endif #if HAVE_STDLIB_H #include <stdlib.h> #endif #if HAVE_PTHREAD_H #include <pthread.h> #endif #if HAVE_LIBRSYNC_H #include <librsync.h> #endif #if HAVE_ZLIB_H #include <zlib.h> #endif #if HAVE_BZLIB_H #include <bzlib.h> #endif #if HAVE_LZO_LZO1X_H #include <lzo/lzo1x.h> #endif #if HAVE_LZMA_H #include <lzma.h> #endif #if HAVE_GCRYPT_H #include <gcrypt.h> #endif #if HAVE_CURL_CURL_H #include <curl/curl.h> #endif #if HAVE_GPGME_H #include <gpgme.h> #endif } ]], [[ #if BACKTRACE_AVAILABLE const int buf_size = 20; void *buffer[buf_size]; int size = backtrace(buffer, buf_size); char **symbols = backtrace_symbols(buffer, size); if(symbols != 0) free(symbols); printf("testing execinfo info in static linked mode..."); #endif #if MUTEX_WORKS if(1) { pthread_mutex_t test; if(pthread_mutex_init(&test, NULL) == 0) { if(pthread_mutex_lock(&test) == 0) pthread_mutex_unlock(&test); } pthread_mutex_destroy(&test); printf("testing mutex availability in static linked mode..."); } #endif #if LIBRSYNC_AVAILABLE if(1) { rs_result err = RS_DONE; (void) rs_strerror(err); printf("testing librsync availability in static linked mode..."); } #endif #if LIBZ_AVAILABLE if(1) { z_stream *ptr = (z_stream *)0; deflate(ptr, 0); printf("testing libz availability in static linked mode..."); } #endif #if LIBBZ2_AVAILABLE if(1) { bz_stream *ptr = (bz_stream *)0; BZ2_bzCompress(ptr, 0); printf("testing libbz2 availability in static linked mode..."); } #endif #if LIBLZO2_AVAILABLE if(1) { int x; printf("testing liblzo2 availability in static linked mode..."); x = lzo1x_1_compress(0, 0, 0, 0, 0); } #endif #if LIBLZMA_AVAILABLE if(1) { lzma_stream ptr = LZMA_STREAM_INIT; lzma_ret tmp = lzma_easy_encoder(&ptr, 2,
LZMA_CHECK_CRC32); printf("testing libxz/lzma availability in static linked mode..."); } #endif #if CRYPTO_AVAILABLE printf("testing gcrypt availability in static linked mode..."); if(!gcry_check_version(MIN_VERSION_GCRYPT)) { printf("libgcrypt version too low"); exit(1); } else exit(0); #endif #if LIBCURL_AVAILABLE printf("testing libcurl availability in static linked mode..."); (void) curl_global_init(CURL_GLOBAL_ALL); #endif #if GPGME_SUPPORT if(1) { gpgme_ctx_t context; gpgme_error_t err = gpgme_new(&context); gpgme_release(context); } #endif return 0; ]]) ], [ AC_MSG_RESULT([yes, perfect!]) static_pb="no" ], [ AC_MSG_RESULT([failed]) static_pb="yes" AS_IF( [ test "x$explicit_dar_static" = "xyes" -a "$build_static" = "yes" ], [ AC_MSG_ERROR([Cannot build dar-static on this system, check config.log]) ] ) ]) AC_ARG_ENABLE( [python-binding], AS_HELP_STRING(--disable-python-binding, [ignore python binding even if it is possible to build it]), [explicit_python_binding=yes], [enable_python_binding=yes]) AS_IF( [ test "x$enable_python_binding" != "xyes" ], [ AC_MSG_WARN([python binding disabled per user request]) local_python="no" ], [ if test "x$enable_shared" != "xyes" -o "$debug_static" = "yes" ; then AC_MSG_WARN([Cannot build python binding without shared library support]) local_python="no" else AC_MSG_CHECKING([for python binding]) pyext="python3-config --extension-suffix" if $pyext 1> /dev/null 2> /dev/null ; then PYEXT="`$pyext`" else local_python="no" fi pyflags="python3 -m pybind11 --includes" if test "$local_python" != "no" && $pyflags 1> /dev/null 2> /dev/null ; then PYFLAGS="`$pyflags`" local_python="yes" AC_MSG_RESULT([ok]) AC_SUBST(PYEXT, [$PYEXT]) AC_SUBST(PYFLAGS, [$PYFLAGS]) else local_python="no" AC_MSG_RESULT([failed]) fi fi AS_IF( [ test "x$explicit_python_binding" = "xyes" -a "$local_python" != "yes" ], [ AC_MSG_ERROR([prerequisite for python binding not met]) ] ) ] ) AM_CONDITIONAL([MAKE_ALL_DIR], [test $examples = "yes"])
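The python binding detection above boils down to two shell probes: `python3-config --extension-suffix` to get the module filename suffix, and `python3 -m pybind11 --includes` to get the compiler include flags. A minimal standalone sketch of that logic (hypothetical, outside of configure; both tools may legitimately be absent) is:

```shell
#!/bin/sh
# Sketch of the two probes configure runs to decide whether the python
# binding can be built. Either command failing disables the binding.
probe_python_binding() {
    if PYEXT=$(python3-config --extension-suffix 2>/dev/null) &&
       PYFLAGS=$(python3 -m pybind11 --includes 2>/dev/null); then
        # configure would AC_SUBST PYEXT and PYFLAGS at this point
        echo "yes"
    else
        echo "no"
    fi
}
probe_python_binding
```

Run directly, it prints "yes" or "no" depending on whether python3 and pybind11 are installed, mirroring the local_python variable set by configure.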
AM_CONDITIONAL([BUILD_DAR_STATIC], [test $build_static = "yes" -a $static_pb = "no"]) AM_CONDITIONAL([DEBUG_STATIC], [test $debug_static = "yes" -a $static_pb = "no"]) AM_CONDITIONAL([BUILD_MODE32], [test "$build_mode" = "32"]) AM_CONDITIONAL([BUILD_MODE64], [test "$build_mode" = "64" -o -z "$build_mode"]) AM_CONDITIONAL([USE_UPX], [test "$upx" = "yes"]) AM_CONDITIONAL([USE_DOXYGEN], [test "$doxygen" = "yes"]) AM_CONDITIONAL([USE_GROFF], [test "$groff" = "yes"]) AM_CONDITIONAL([PROFILING], [test "$profiling" = "yes"]) AM_CONDITIONAL([BSD_SED], [test "$local_sed" = "bsd"]) AM_CONDITIONAL([WITH_LIBTHREADAR], [test "$local_threadar" = "yes"]) AM_CONDITIONAL([PYTHON_BINDING], [test "$local_python" = "yes"]) AC_SUBST([CXXSTDFLAGS], [$CXXSTDFLAGS]) AC_SUBST(UPX_PROG, [upx]) AC_SUBST(DOXYGEN_PROG, [doxygen]) AC_SUBST(HAS_DOT, [$dot]) # defaults AC_PREFIX_DEFAULT(/usr/local) # hack from libtool mailing-list to know from source point of view whether we are compiling for dynamic or static way AC_CONFIG_COMMANDS([hack-libtool], [ sed 's,^pic_flag=,pic_flag=" -D__DYNAMIC__ ",' libtool > libtoolT \ && mv -f libtoolT libtool && chmod 755 libtool ]) AC_CONFIG_FILES([Makefile man/Makefile src/Makefile src/libdar/Makefile src/dar_suite/Makefile src/testing/Makefile src/examples/Makefile doc/Makefile doc/samples/Makefile misc/Makefile doc/mini-howto/Makefile src/libdar/libdar.pc.tmpl doc/man/Makefile src/check/Makefile src/python/Makefile po/Makefile.in]) AC_OUTPUT [echo "" echo "--" echo "dar and libdar have been successfully configured with the following parameters:" echo "" echo " LIBDAR parameters:" printf " Zlib compression (gzip) : " if [ "$local_libz" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Libbz2 compression (bzip2) : " if [ "$local_libbz2" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Liblzo2 compression (lzo) : " if [ "$local_liblzo2" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Liblxz compression (xz) : " if [ "$local_libxz" = "yes" ] ; 
then echo "YES" else echo "NO" fi printf " Liblzstd compression (zstd): " if [ "$local_libzstd" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Liblz4 compression (lz4) : " if [ "$local_liblz4" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Strong encryption support : " if [ "$local_crypto" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Public key cipher support : " if [ "$local_gpgme" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Extended Attributes support: " if [ "$local_ea_support" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Large files support (> 2GB): " if [ ! -z "$ac_cv_sys_file_offset_bits" -o ! -z "$ac_cv_sys_large_files" ] ; then echo "YES" else echo "NO" fi printf " extX FSA / nodump support : " if [ "$local_nodump_feature" = "yes" ] ; then echo "YES" else echo "NO" fi printf " HFS+ FSA support : " if [ "$local_birthtime" = "yes" ] ; then echo "YES" else echo "NO" fi printf " statx() support : " if [ "$local_statx" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Integer size used : " if [ "$build_mode" = "infinint" ] ; then echo "infinint" else if [ -z "$build_mode" ] ; then build_mode=64 fi echo "$build_mode" fi printf " Thread safe support : " if [ "$local_mutex_works" = "yes" -a -z "$local_test_memory" ] ; then echo "YES" else echo "NO" fi printf " Furtive read mode : " if [ "$local_furtive_read_mode" = "yes" ]; then echo "YES" else echo "NO" fi printf " Large directory optim. 
: " if [ "$local_fast_dir" = "yes" ] ; then echo "YES" else echo "NO" fi printf " posix fadvise support : " if [ "$local_posix_fadvise" = "yes" ] ; then echo "YES" else echo "NO" fi printf " timestamps write accuracy : " if [ $local_time_write_accuracy -eq $local_time_accuracy_nanosecond ] ; then echo "1 nanosecond" else if [ $local_time_write_accuracy -eq $local_time_accuracy_microsecond ] ; then echo "1 microsecond" else echo "1 second" fi fi printf " timestamps read accuracy : " if [ $local_time_read_accuracy -eq $local_time_accuracy_nanosecond ] ; then echo "1 nanosecond" else if [ $local_time_read_accuracy -eq $local_time_accuracy_microsecond ] ; then echo "1 microsecond" else echo "1 second" fi fi printf " can restore symlink dates : " if [ "$local_lutimes" = "yes" ] ; then echo "YES" else echo "NO" fi printf " can use multiple threads : " if [ "$local_threadar" = "yes" ] ; then if [ "$local_threadar_barrier_mac" = "yes" ] ; then echo "YES (+ barrier implementations for MAC OS)" else echo "YES" fi else echo "NO" fi printf " Delta-compression support : " if [ "$local_librsync" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Remote repository support : " if [ "$local_libcurl" = "yes" -a "$local_threadar" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Argon2 hashing algorithm : " if [ "$local_argon2" = "yes" ] ; then echo "YES" else echo "NO" fi echo "" echo " DAR SUITE command line programs:" printf " Long options available : " if [ "$local_have_getopt_long" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Building examples : " if [ "$examples" = "yes" ] ; then echo "YES" else echo "NO" fi printf " Building dar_static : " if [ "$build_static" = "yes" ]; then if [ "$static_pb" = "yes" ]; then echo "NO (system does not support static linking, see note below)" else echo "YES" fi else echo "NO" fi printf " using upx at install : " if [ "$upx" = "yes" ] ; then echo "YES" else echo "NO" fi printf " building documentation : " if [ "$doxygen" =
"yes" ] ; then echo "YES" else echo "NO" fi printf " building python binding: " if [ "$local_python" = "yes" ] ; then echo "YES" else echo "NO" fi if [ "$static_pb" = "yes" -a "$build_static" = "yes" ] ; then echo "" echo " Note:" echo "" echo " If you want to know which libraries are not available as static" echo " libraries, check the logs in the generated config.log file. The command" echo "" echo " 'grep -e -static -A 2 config.log'" echo "" echo " should bring you to the essentials." echo "" echo "You might also want to speed up the compilation process by running ./configure" echo "with the --disable-static option" echo "" fi if [ -z "$build_mode" ] ; then echo "" echo "--------------------- N O T E -------------------------------------------" echo "Note: You are about to build a libdar/dar binary relying on the \"infinint\"" echo "integer type. You should also consider using 64 bit integers (whether" echo "your CPU has 32 or 64 bit registers) for better performance and reduced" echo "memory requirements, at the cost of the limitations explained here:" echo " ./doc/Limitations.html (Paragraph about Integers)" echo "This document is also available online at:" echo " http://dar.linux.free.fr/doc/Limitations.html#Integers" echo "Unless you are impacted by these limitations, you can rerun ./configure" echo "adding the option --enable-mode=64 for better dar/libdar performance" echo "-------------------------------------------------------------------------" fi if [ "$libgcrypt_hash_bug" = "yes" ] ; then echo "" echo "" echo "#################### W A R N I N G ######################################" echo "" echo "libgcrypt version is lower than $min_version_gcrypt_hash_bug and has a bug" echo "concerning hash calculation for large files. Expect sha1 and md5 hash" echo "results for slices larger than 256 Gio (gibioctet) to be incorrect."
echo "" echo "#################### W A R N I N G ######################################" fi echo "" ] dar-2.7.17/NEWS0000644000175000017520000000075114767500073010014 00000000000000 For changes brought by each new version, please read the Changelog file in this directory. If accessing source code from GIT, this file is located at src/build/ChangeLog For events concerning dar, you can check: the dar-news mailing list archives: http://lists.sourceforge.net/lists/listinfo/dar-news Here is the home page of the dar-news mailing list if you wish to subscribe to it (less than 10 emails a year): http://lists.sourceforge.net/lists/listinfo/dar-news dar-2.7.17/ChangeLog0000444000175000017520000033737614767510000011074 00000000000000from 2.7.16 to 2.7.17 - fixed bug where a -R path ending with // was breaking the path filtering mechanism (-P/-g/-[/-] options). - added FAQ about the HOME variable under Windows (John Slattery's feedback) - fixed bug where repairing failed with sliced backups - making exceptions thrown from the shell_interaction class get their error message passed and reported to the user. from 2.7.15 to 2.7.16 - fixed mask building of path exclusion (dar's -P option) when used with regular expressions (problem met while testing or merging a backup) - adding support for progressive report to the repairing operation at API level - warning before proceeding with the backup if gnupg signatories are provided without any gnupg recipient. - fixing bug reporting the following message: /Subtracting an "infinint" greater than the first, "infinint" cannot be negative/. This was due to a duplicated counter decrement while merging two archives when the overwriting policy drives entries to be removed from the resulting archive, combined with the very specific/rare condition where the number of removals exceeds half of the kept entries... - adding kdf support for the repairing operation instead of using the values of the archive/backup under reparation.
- fixing bug in thread_cancellation class that led a canceled thread to stay recorded as canceled forever, leading libdar to abort immediately when run in a new thread having the same tid. - fixing bug in libdar leading an API call to return zero instead of the total size of the backup/archive (not used by the dar CLI). - applying patch from Gentoo about the "which" command replacement in scripts - fixing some non-initialized variables as reported by the cppcheck tool. from 2.7.14 to 2.7.15 - updating libdar about the CURLINFO_CONTENT_LENGTH_DOWNLOAD symbol which is reported as deprecated by recent libcurl libraries. - fixed compilation problem under MacOS Mojave - fixed bug that led the warning about a backup operation about to save itself to not show - removing obsolete call to gcry_control(GCRYCTL_ENABLE_M_GUARD) while initializing libgcrypt. This led libgcrypt initialization to fail with libgcrypt 1.11 and more recent versions. - fixed listing bug about "present but unsaved" FSA status - fixed dead-lock condition in libdar when used with libcurl > 7.74.0 at the end of closing an sftp session (undocumented changed behavior in libcurl). from 2.7.13 to 2.7.14 - adding safe guard in the fichier_libcurl destructor to verify all data have been passed to libcurl *and* libcurl has completed the writing operation before destroying a fichier_libcurl object. - adding support for thread cancellation when interacting with libcurl - updating man page - fixing some errors in the code documentation - updated FAQ from 2.7.12 to 2.7.13 - fixing bug auto-reported in "slice_layout.cpp line 48" concerning isolated catalogs, triggered when dar/libdar has been compiled with some versions of gcc (gcc >= 11) with optimizations set.
- fixing the configure script to not fail if libargon2 is not present, unless explicitly asked by --enable-libargon2-linking - added support for cppcheck to reinforce code robustness and sanity - updating documentation about slice_layout structure and usage in libdar from 2.7.11 to 2.7.12 - fixed bug that prevented restoration of a binary patch when it was based on another binary patch. Added workaround to read archive format 11.2 (used for releases 2.7.9 to 2.7.11). Archive format has been updated to 11.3 while 2.7.12 generated backups have the exact same format as 11.2; this trick is only used to activate the workaround for badly generated backups - adding new condition for the overwriting policy (E letter), used to improve decremental backup when a file's only change concerns its metadata (inode information, like permission or ownership) - fixing segmentation fault met when listing an archive's content with the -I option - fixing bug in a testing routine that let a regression go unseen and be released in 2.7.11.
- Improved compression exclusions - adding support for the ronna (10^27) and quetta (10^30) new SI prefixes, R and Q respectively - fixing bug in the infinint class met when extending the underlying storage by zero bytes - avoiding delta sig block size calculation when not necessary from 2.7.9 to 2.7.10 - displaying the slicing information about the archive of reference stored within an isolated catalogue when using -l and -q options - cleaning up code from obsolete and unused readdir_r invocation - fixing display bug in dar_manager (shell_interaction class of libdar) - fixing python binding build system with John Goerzen's proposal - replacing the deprecated PYBIND11_OVERLOAD_* by their PYBIND11_OVERRIDE_* equivalents from 2.7.8 to 2.7.9 - added sanity check in elastic buffer code upon Sviat89@github feedback. - fixed bug in the block_compressor module found by sviat89 while reading the code. Given its context, it does not seem likely to manifest, but it would lead dar/libdar to loop forever consuming CPU. - adding the misc/dar_static_builder_with_musl_voidlinux.bash script which automatically builds dar_static from source code under VoidLinux/musl - fixing bug concerning the restoration in sequential read mode of a backup containing binary patches - fixed bug in the tuyau_global class that led to seeking at a wrong offset in sequential read mode and to the inability to properly rely on an isolated catalogue to read (test/extract/diff) a backup in sequential read mode, leading dar to report a CRC error.
from 2.7.7 to 2.7.8 - adapted code to work around a clang 10.0.1 bug under FreeBSD 12 - updating code and man page about the way to solve the conflict of sequentially reading the archive of reference when binary delta is implicitly present for differential/incremental backups - added -avc option to surface libcurl verbose messages - fixed bug in dar where a sanity check about slice min digit detection was applied to the local filesystem when the backup was stored remotely; this prevented the reading of remote backups - exposing libcurl version in the version output (new API call added for that, leading to upgrading the libdar API to version 6.5.0) - removed extra slash (/) found after the hostname in URLs passed to libcurl - fixed self-test reported error about mycurl_easyhandle_node.cpp line 176 - improved error message when libcurl fails to connect to an sftp server - fixed bug in libdar in the way libcurl is called for reading a file using the ftp protocol - fixed bug in libdar when asking libcurl the size of the file we are writing (libcurl segfaults with the ftp protocol). In addition, we now record this info during the write process (faster and more efficient). - fixed bug met when creating a backup on very close and/or high bandwidth ftp and sftp repos with the --hash option set, triggering a race condition that led dar to sometimes hang unexpectedly.
from 2.7.6 to 2.7.7 - added support for sequential reading mode of sliced backups, to accommodate tape usage with slices (as opposed to dar_split) - fixing a few typos in doc - making libdar more tolerant when calls to fadvise fail from 2.7.5 to 2.7.6 - adding -f option to dar_cp - adding static version of dar_cp (dar_cp_static) as compilation outcome - added FAQ for tape usage with dar - fixing error in libdar header file installation - fixed bug met when interrupting the creation of a block compressed backup (always used by lzo compression and by other algorithms only when performing multi-threaded compression) - typo fixes in documentation - fixed message in lax mode used to obtain from the user the archive format when this information is corrupted in the archive. - fixing lax mode condition that popped up without being requested - fixing bug met when reading a slice on a special block device by means of a symlink - adapting sanity checks to the case of a backup read from a special device in sequential-read mode. - fixed bug that led dar to report a CRC error while reading a backup from a pipe with the help of an isolated catalogue - adding -V option to dar_split (was using -v) for homogeneity with other commands from 2.7.4 to 2.7.5 - fixed double free error met when deciphering an archive with a wrong password/passphrase and when multi-threading is used.
from 2.7.3 to 2.7.4 - fixed excessive context control that led libdar to report a bug when a file-system I/O error was returned by the operating system - fixed mini-digits auto-detection, which only worked when slice number 1 was present, even if subsequent slices could be used to detect its value - fixed typos and minor incoherence in documentation - updated version information to display the libthreadar barrier implementation used, info available since its release 1.4.0 from 2.7.2 to 2.7.3 - fixed bug met when restoring files in sequential-read mode and feeding the backup to dar on its stdin: When dar had to remove a file that had been deleted at the time of the backup since the backup of reference was made, dar aborted the restoration reporting it could not skip backward on a pipe. - adding call to kill() then join() in destructor of class slave_thread and fichier_libcurl to avoid the risk of SEGFAULT in rare conditions under Cygwin (Windows) - fixed several typos in bug messages and comments - fixed script used to build windows binary to match recent cygwin dll - fixed spelling and improved clarity in dar_split messages from 2.7.1 to 2.7.2 - fixed bug met when a user command returns an error while dar is saving fsa attributes of inodes (this conjunction makes it an infrequent situation) - fixed typo in documentation - fixed remaining bug from 2.7.1 met when compiling dar/libdar with a compiler (clang-5.0 here) that requires a flag to activate C++14 support while complaining (for a good reason) when this flag is also passed while compiling C code. - fixed self-reported bug escape.cpp line 858 met when using lzo compression or multi-threaded compression while creating a backup to stdout.
- fixed bug met when creating a backup to stdout that led libdar to corrupt data when trying to re-save a file uncompressed due to a poor compression result - fixed minor bug met when setting --mincompr to zero and using block_compressor (lzo algo or any algo in block compression): empty files were reported as truncated due to the lack of block header (compressed empty file being stored as an empty file) from 2.7.0 to 2.7.1 - fixed compilation script to require support for C++14 due to new features introduced in 2.7.0 that rely on modern C++ constructions. Updated documentation accordingly about this updated requirement. - fixed missing included header for compilation to succeed under MacOS X - fixed typo in man page - adding minor feature: storing the backup root (-R option) in the archive to be able to restore 'in place' thanks to the new -ap option that sets the -R path to the one stored in the archive. - merging fixes and enhancements brought by release 2.6.15 from 2.6.x to 2.7.0 - using the truncate system call whenever possible in place of skipping back in the archive, when a file needs to be re-saved uncompressed, or when a file has changed while it was read for backup - improved slice name versus base name error, now substituting the former with the latter (rather than just asking the user) when it would otherwise lead to an execution error. - auto-detecting min-digits at reading time in complement of the feature listed just above - added the possibility to specify a gnupg key by means of its key-id in addition to the email address it may be associated to, both for asymmetrical encryption and archive signing. - added -b option to dar_split to set the I/O block size - added -r option to dar_split to limit the transfer rate - added -c option to dar_split to limit the number of tapes to read - new feature: zstd compression algorithm added - replaced old and experimental multi-threaded implementation by a production grade one.
-G option may now receive an argument on command line for fine tuning. libdar API has been updated accordingly. - added multi-threaded compression when using the new per-block compression mode; the legacy streaming compression mode is still available (see both -G and -z option extended syntax). - added lz4 compression algorithm. - removed some deprecated/duplicated old options (--gzip,...) - enhanced the delta signature calculation providing means for the user to adapt the signature block size in regard to the size of the file to delta-sig. "--delta sig" option received an extended syntax. - increased timestamp precision from microsecond to nanosecond when the operating system supports it. It is still possible to use configure --enable-limit-time-accuracy=us at compilation time to use microsecond or even second accuracy even if the OS can support finer precision. - added argon2 hashing algorithm for key derivation function (kdf) which becomes the default if libargon2 is available, else it defaults to sha1 as for 2.6.x. When argon2 is used, kdf default iteration count is reduced to 10,000 (and stays 200,000 with sha1). This can be tuned as usual with the --kdf-param option. - adding support for statx() under Linux which lets libdar save a file's birthtime. Unfortunately, unlike under BSD systems (FreeBSD, MacOS X, and so on), the utime/utimes/utimensat calls do not set birthtime, so you cannot restore birthtime on Linux, unlike under BSD systems - AES becomes the default when gnupg is used without additional algorithm. - new implementation of the libcurl API usage for more efficient reuse of established sessions. - You can now exclude/include per filesystem rather than just sticking to the current filesystem or ignoring filesystem boundaries. -M option can now receive arguments. - new feature: dar_manager can now change the compression algorithm of an existing database. - Added a benchmark of dar versus tar and rsync for different use cases.
- documentation review, update, cleanup, restructuring and beautification. from 2.6.15 to 2.6.16 - fixed bug met when restoring files in sequential-read mode and feeding the backup to dar on its stdin: When dar had to remove a file that had been deleted at the time of the backup since the backup of reference was made, dar aborted the restoration reporting it could not skip backward on a pipe. - adding call to kill() then join() in destructor of class slave_thread and fichier_libcurl to avoid the risk of SEGFAULT in rare conditions under Cygwin (Windows) - fixed bug met when removing tape marks (-at option) and, due to poor compression, dar skips back to re-save the file uncompressed, leading to a self-reported bug (due to an error in the sanity check). - fixing error message displayed by dar when the -y option is used with another command (-c/-t/-l/-d/-x/-+/-C) from 2.6.14 to 2.6.15 - fixed error message formatting error leading the message to contain garbage in place of system error information. - fixing bug (internal error) met while trying to restore files and dirs without sufficient write permission on the destination directory tree to perform the operation. - adding minor feature to avoid restoring Unix sockets (-au option) - fixing dar-catalogue.dtd from 2.6.13 to 2.6.14 - script used to build dar windows binary has been fixed to have the official default etc/darrc config file usable and used out of the box. - fixed bug met when removing slices of an old backup located on a remote sftp server - fixed bug in cache layer met when writing sliced backup to a remote ftp or sftp repository - enhancement to the -[ and -] options to work as expected when a "DOS" formatted text file is provided as a file listing.
from 2.6.12 to 2.6.13 - fixed compilation warning in testing routine (outside libdar and dar) - due to a change in autoconf, the --sysconfdir path (which defaults to ${prefix}/etc) was read as an empty string, leading dar to look for the darrc system file at the root of the filesystem (/darrc) - fixed bug that could occur in extremely rare conditions (it has been discovered during the 2.7.0 validation process): compression must be used, no ciphering, no hashing, the file changed at backup time or had a poor compression ratio, was not saved at a slice boundary, and the previous entry had an EA saved but no FSA or an unchanged FSA. When such conditions are all met, dar tries to resave the file in place, but partially or totally overwrites the EAs of the previous entry, leading archive testing to fail for these EA (though the archive could be finished without error). - fixed bug met when a case insensitive mask is requested (-an option) and the locale of the file to restore or backup is not the one the dar binary is run with. from 2.6.11 to 2.6.12 - fixed regression met in 2.6.11 when generating encrypted archives from 2.6.10 to 2.6.11 - fixing bug in dar_manager libdar part, met when the two oldest entries for a file are recorded as unchanged (differential backup). - fixed typo in dar_manager man page - updated lax mode to ignore encryption flag found in header and trailer - fixed two opposite bugs in strong encryption code that annihilated each other, by chance - fixing bug met when merging an archive and re-compressing the data with another algorithm that gives a worse result; this condition led the merging operation to fail reporting a CRC mismatch - improving archive header code to cope with unknown flags from 2.6.9 to 2.6.10 - updated the configure script to handle some undocumented --enable-* options that existed but were not expected to be used.
- fixed spelling in darrc comments - fixed bug in dar_split that could occur in very rare conditions - fixed EA support build failure due to what seems to be a change in a Linux kernel header - fixed symbol conflict with s_host of in.h on the omniOS platform from 2.6.8 to 2.6.9 - fixed some obvious bugs when running doxygen (inlined documentation) - fixing configure.ac to detect the xattr.h system header when it is located in /usr/include/sys like under the Alpine Linux distro (musl libc) - fixed typo in symbol name "libdar.archive_summary" in python binding - fixed bug met when testing an archive in sequential-read mode leading dar to skip back to test deleted inodes, which is useless and may lead to failure if the archive is read from a pipe - adding *.zst files as excluded from compression when using the predefined target "compress-exclusion" - fixed text diagram alignment in documentation and spelling errors - moving date_past_N_days script to doc/sample with other scripts from 2.6.7 to 2.6.8 - fixing bug that led binary delta to fail to be read from an archive in some quite rare conditions. - fixed bug that was not listing files with delta patch when filtering out unsaved inodes - updated source package for the python binding tutorial document to be installed with the rest of the documentation - adding date_past_N_days helper script to backup only files newer than "today minus N days" - incorporated the "args" support build script in the dar source package from 2.6.6 to 2.6.7 - fixing shell_interaction_emulator class declaration to avoid compilation errors and warnings under MacOS - fixed bug: dar failed creating an archive on its standard output reporting the error message "Skipping backward is not possible on a pipe" - new feature: added python binding to libdar!
from 2.6.5 to 2.6.6 - fixing script that builds windows binary packages to include missing cygwin libraries - fixing bug: dar_manager batch command (-@ option) contained an inverted test in a sanity check, leading the execution to systematically abort reporting an internal error message. - fixed error message type when asymmetrical encryption is requested and gpgme has not been activated at compilation time - fixed dar/libdar behavior when the gpg binary is not available and gpgme has been activated at compilation time. Instead of aborting, dar now signals the gpgme error and proposes to retry initialization without gpgme support. This makes sense for the dar_static binary which stays usable in that context when all options have been activated from 2.6.4 to 2.6.5 - fixed bug: dar crashed when the HOME environment variable was not defined (for example running dar from crontab) - removed useless skip() in cache layer - cache layer improvement, flushing pending write data before asking the lower layer about skippability - fixed bug met when several consecutive compressed files were asked to be compressed and failed getting reduced in size by libdar. In that situation, as expected, libdar tries to skip backward and stores the file uncompressed. However, the cache layer was introducing an offset of a few bytes, leading the next file to be written over the end of the previous one, which dar reported as data corruption when testing the archive. - updating licensing information with the new address of the FSF - clarifying message about possibly truncated filename returned by the operating system from 2.6.3 to 2.6.4 - fixed display bug indicating delta signatures were about to be calculated even when this was not the case. - updating dar man page about the fact that aes256 replaced blowfish as the default strong encryption algorithm - bug fix: -D option used at creation time was not adding escape marks for skipped directories.
This led the empty directories that replace each skipped one to be inaccessible and impossible to restore in sequential-read mode only (it worked as expected in direct mode)

from 2.6.2 to 2.6.3
- feature enhancement: added an option to specify the block size used to create delta signatures
- feature enhancement: added the ability to provide a login for sftp/ftp remote access that contains @ and other special characters
- fixed bug in dar_xform leading dar to not find the source archive if the destination was not placed in the same directory as the source

from 2.6.1 to 2.6.2
- fixed incoherence in documentation
- updated the in-lined help information (-h option)
- fixed unexpected behavior of the dar command-line filtering mechanism met when the path provided to the -P or -g options ended with a slash
- renamed 'path operator + (std::string)' as method append() to avoid the compiler using it when a std::string first needs to be converted to a path before adding it to an existing path
- added a check to detect when path::append() is used to add a path instead of a filename to an existing path object
- added a warning when restoring a Unix socket if the path to that socket is larger than what the sockaddr_un system structure can handle
- fixed bug due to Linux removing file capabilities (stored as EA) when file ownership is changed. Restoring EA after ownership restoration may be impossible due to lack of permission when dar/libdar is not run as root, thus we try restoring EA a second time after ownership restoration. This is not efficient but restores files as close as possible to their original state, whatever permissions dar has been granted for the restoration operation
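The capability/EA restoration order described in the last entry above can be sketched with a toy in-memory "filesystem" (a dict, purely an assumption for illustration): the kernel drops the capability EA on chown, so the EA set is applied a second time after ownership is restored.

```python
def restore(entry, fs):
    """Sketch of the restore order behind the fix above: the kernel drops
    capability EAs on chown, so EAs are restored a second time after
    ownership. 'fs' is a toy in-memory filesystem, not a real one."""
    def chown(owner):
        fs["owner"] = owner
        fs["ea"].pop("security.capability", None)  # kernel drops the cap EA
    def set_ea():
        fs["ea"].update(entry["ea"])
    set_ea()               # first EA restoration
    chown(entry["owner"])  # ownership change silently drops capability EAs
    set_ea()               # retried EA restoration, as the fix does
    return fs
```

Without the second set_ea() call, the capability EA would be silently lost after the ownership change.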
from 2.6.0 to 2.6.1
- fixed error in man page
- fixed bug in the routine removing files on the local filesystem, used at archive creation time to remove an existing archive (after user confirmation), or at restoration time to remove files that had been removed since the archive of reference was made. The file to remove was always removed from the current directory (missing the path part); most of the time this led to the error message "Error removing file ...: No such file or directory". It could also lead to incorrectly removing files (not directories) located in the directory from which dar was run
- fixed bug met while repairing an archive containing delta signatures for unsaved files
- merged patch from ballsystemlord updating the list of file extensions not to compress (see compress-exclusion defined in /etc/darrc)
- reviewed the cat_delta_signature implementation in order to be able to fix a memory consumption problem when delta signatures are used
- fixed missing mark for the data CRC when the data is a delta patch, leading sequential reading to fail when a delta patch was encountered
- fixed bug in XML output about deleted entries
- fixed XML output to be identical to that of dar 2.5.x for deleted entries
- added the deletion date in the 'mtime' field for deleted entries in XML output
- fixed bug in the xz/lzma routine wrongly reporting an internal error when corrupted data was met
- fixed code for compilation with clang to succeed (this concerns MAC OS X in particular)
- fixed inconsistencies in the libdar API that prevented gdar from compiling with the libdar released in 2.6.0

from 2.5.x to 2.6.0
- new feature: support for binary delta in incremental/differential backups (relying on librsync)
- new feature: support ftp/sftp to read an archive from a cloud storage
(relying on libcurl); reading is optimized to not transfer a whole slice but only the part needed to proceed with the operation (restoration, listing, and so on)
- new feature: support ftp/sftp to write an archive, possibly with hash files, to a remote cloud storage (relying on libcurl)
- modified behavior: while creating a single-sliced archive, the DUC file is now executed unless the user interrupted dar/libdar. This is to stay coherent with the multi-sliced archive behavior
- new feature: display the nature of filters (-vmasks option)
- new feature: follow some symlinks as defined by the --ignored-as-symlink option
- new feature: one can define the compression algorithm a dar_manager database will use. This choice is only available at database creation, using the new dar_manager -z option. In particular "-z none" can be used to avoid using compression at all
- added a repair mode to re-create a completed archive (suitable for direct access mode and merging) from one interrupted by lack of disk space, power outage or other causes leading to a similar problem
- dar can now save only the inode metadata change, without re-saving the whole file, when its data has not changed. dar_manager also handles this by restoring the full backup and then the inode metadata only when necessary
- in regard to the previous point, if you want to keep having dar save the data when only metadata has changed, use the --modified-data-detection option
- moved dar_slave code into libdar as class libdar::libdar_slave
- moved dar_xform code into libdar as class libdar::libdar_xform
- added libdar_slave and libdar_xform to the libdar API
- modified dar_xform and dar_slave to rely on the new libdar API
- API: simplified the user_interface class
- API: using std::shared_ptr and std::unique_ptr to explicitly show the ownership of the given pointed-to objects (C++11 standard)
- API: simplified class archive to only require user_interaction at object construction time
- API: simplified class database to only require user_interaction at object construction time
- API: made enum crypto_algo a C++11 "enum class" type
- security refresh: the default crypto algorithm is now AES256. As you no longer need, since 2.5.0, to specify the -K option when reading an archive, this should not bring any backward compatibility issue
- security refresh: added a salt per archive (one is still present per block inside an archive)
- security refresh/new feature: added the --kdf-param option to define the iteration count for key derivation, which now defaults to 200,000, as well as the hash algorithm used to derive the key, still sha1 by default
- side effect of the previous feature, due to starvation of free letters to add a new command: the -T option without an argument is no longer available, one needs to provide the desired argument explicitly
- security refresh: improved seed randomization for the pseudo-random generator used in elastic buffers
- feature enhancement: activate needed Linux capabilities in the "effective" set if they are permitted but not effective. This concerns cap_chown at restoration time, cap_fchown for furtive read mode, cap_linux_immutable to restore the immutable flag, and cap_sys_resource to set some Linux FSA.
This lets one set the capabilities for the dar binary only in the "permitted" set; capabilities will then be allowed only for users having them in the "inheritable" set of their calling process (usually a shell), without needing root privilege.
- the ./configure --enable-mode option now defaults to 64, which will set up a libdar64 in place of the infinint-based libdar by default. You can still build an infinint-based libdar by passing --enable-mode=infinint to the ./configure script

from 2.5.21 to 2.5.22
- removed useless skip() in the cache layer
- cache layer improvement: flushing pending write data before asking the lower layer about skippability
- fixed bug met when several consecutive compressed files failed to get reduced in size by libdar. In that situation, as expected, libdar tries to skip backward and stores the file uncompressed. However, the cache layer was introducing an offset of a few bytes, leading the next file to be written over the end of the previous one, which dar reported as data corruption when testing the archive
- updated licensing information with the new address of the FSF
- fixed bug met when restoring a file having FSA but no EA and overwriting an existing file in the filesystem
- clarified the message about possibly truncated filenames returned by the operating system

from 2.5.20 to 2.5.21
- bug fix: the -D option used at creation time was not adding the escape mark of skipped directories. This led the empty directories that replace each skipped one to be inaccessible and impossible to restore in sequential-read mode only (it worked as expected in direct mode)

from 2.5.19 to 2.5.20
- added a warning when restoring a Unix socket if the path to that socket is larger than what the sockaddr_un system structure can handle
- fixed bug due to Linux removing file capabilities (stored as EA) when file ownership is changed.
Restoring EA after ownership restoration may be impossible due to lack of permission when dar/libdar is not run as root, thus we try restoring EA a second time after ownership restoration. This is not efficient but restores files as close as possible to their original state, whatever permissions dar has been granted for the restoration operation
- fixed compilation problem with recent clang++ compilers

from 2.5.18 to 2.5.19
- fixed compilation issue on systems that do not have ENOATTR defined
- fixed compilation warning about deprecated dynamic exception specifications in C++11
- fixed bug in the xz/lzma routine wrongly reporting an internal error when corrupted data was met
- fixed compilation warning with gcc about the deprecated readdir_r system call

from 2.5.17 to 2.5.18
- fixed compilation issue in contexts where EA are not supported
- fixed typo in the dar man page (--sequential-mode in place of --sequential-read)
- moved the "no EA support warning" trigger later in the EA restoration process when restoring an archive, to make it possible, thanks to the -u "*" option, to restore an archive containing EA using a dar/libdar built without EA support activated at compilation time
- at restoration time, avoid issuing an "EA are about to be overwritten" warning when the in-place file has in fact no EA set

from 2.5.16 to 2.5.17
- bug fix: dar failed to restore EA when the file permissions to restore did not include user write access.
The fix consists of temporarily adding user write access in order to restore the EA, then removing this extra permission afterward if necessary
- updated FAQ
- fixed typos in the dar man page
- fixed bug met when writing slices to a read-only filesystem
- fixed compilation problem under Solaris
- bug fix: self-reported bug in filtre.cpp line 2932 or 2925 depending on dar's version (the report occurred in a normal but rare condition not imagined by the developer, leading dar to abort the backup)
- bug fix: wrong evaluation of the possibility to seek backward in the escape layer (the layer managing tape marks), which led to useless but harmless skip attempts in some rare conditions

from 2.5.15 to 2.5.16
- bug fix: while rechecking sparse files (-ah option) during a merging operation, dar wrongly reported CRC mismatch for saved plain files
- fixed the man page about sparse-file handling while merging: to remove the sparse file data structure during a merging operation you need to set --sparse-file-min-size to a value larger than all file sizes contained in the archive (for example 1E for one exabyte)
- bug fix: bug met when using compression and creating the archive on dar's standard output (ssh), leading files to be corrupted in the archive and reported as such
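The "temporarily add user write access" fix described above can be sketched in a few lines. This is an illustrative sketch, not dar's actual C++ code; the helper name and callback interface are assumptions.

```python
import os
import stat

def with_user_write(path, action):
    """Temporarily grant user write permission so 'action' (e.g. EA
    restoration) can proceed, then drop the extra permission again if it
    was absent -- a sketch of the fix described above, not dar's code."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    had_write = bool(mode & stat.S_IWUSR)
    if not had_write:
        os.chmod(path, mode | stat.S_IWUSR)  # add the temporary write bit
    try:
        return action(path)
    finally:
        if not had_write:
            os.chmod(path, mode)             # restore the original mode
```

The try/finally ensures the original permissions come back even if the action fails.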
- optimisation of escape_sequence skippability (avoids trying to skip and failing in some corner cases, when we can detect it is not even worth trying)

from 2.5.14-bis to 2.5.15
- fixed self-reported bug message met when trying to create an isolated catalogue in a directory that does not exist
- added slice overwriting verification before creating an isolated catalogue, to be coherent with the other operations creating an archive (backup and merging)
- the storage size of compressed files was often wrongly stored in the archive (shorter than reality); the only impact took place at archive listing time, where the compression ratio displayed was better than reality
- fixed auto-detected bug condition triggered when -Tslicing is used with --sequential-read. Both options are not compatible and are now excluded with a nicer message than this auto-detected bug message

from 2.5.14 to 2.5.14-bis
- avoid using the syncfs() system call in dar_split when the platform does not support it (replacing it by sync() in that case for compilation to be successful)

from 2.5.13 to 2.5.14
- made the libgcrypt built-in memory guard be initialized before obtaining the libgcrypt version, to respect libgcrypt usage (though no problem was seen nor reported about this inconsistency)
- fixed syntax error in the XML listing output (EA_entry and Attributes tags)
- fixed typos in the dar man page
- updated the Tutorial for restoration
- fixed bugs in dar_split: cygwin support, file descriptors were not explicitly closed at end of execution, the buffer is now allocated on the heap rather than on the stack for better size flexibility, and the buffer size can no longer be greater than SSIZE_MAX
- added the -s option to dar_split in order to disable the by-default SYNC write, which caused poor performance. To keep the same behavior as the older dar_split (and its poor performance) you now need to use the -s option
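The SYNC-write trade-off behind the dar_split -s entry above can be illustrated with plain POSIX flags: opening with O_SYNC forces each write to reach the media before returning, which is safer at media boundaries but much slower. This is a generic sketch, not dar_split's actual code.

```python
import os

def write_file(path, data, sync=False):
    """Write data with or without O_SYNC -- a sketch of the trade-off
    behind dar_split's SYNC-write option described above: synchronous
    writes are safer but much slower (illustrative, not dar_split's code)."""
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    if sync:
        flags |= os.O_SYNC  # each write hits the media before returning
    fd = os.open(path, flags, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```

The data written is identical either way; only the durability/latency balance changes.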
- dar_split enhancement: added a call to syncfs before closing the file descriptor in split_output mode
- fixed bug in dar_split leading it to not completely fill a device before asking the user to change the media when used in split_output mode; this sometimes led dar to report files as corrupted at media boundaries
- added a feature in dar_split to show the amount of data written since the last media change

from 2.5.12 to 2.5.13
- added the -az option to automatically nullify negative dates returned from the system in the archive under creation (the filesystem is not modified)
- included the birthtime (HFS FSA) into the negative dates handling
- modified behavior: dar now fails upon an unknown option instead of warning that the option is unknown and thus ignored
- bug fix: dar 2.5.12 and below in the 2.5.x branch could not read archives generated by dar 2.4.x and below (unless in infinint compilation mode) when the old archive included a file whose date(s) were returned by the system as a negative integer at the time of the backup. Note that while dar can now read old archives in that particular case, such dates stay recorded in the dar archive as very far in the future and not in the past, because 2.4.x and below blindly assumed the system would always return a positive integer as the number of seconds since 1970. Since the 2.5.12 release, when the system provides a negative date, the date is assumed to be zero (Jan 1970) with user agreement
- fixed missing throw in tools.cpp (an exception condition was not reported)

from 2.5.11 to 2.5.12
- documented in the man page the limitation of the -[ and -] options concerning the maximum line length that can be used in a listing file.
This limitation was only described in doc/Limitations.html
- dar now aborts if a line exceeding 20479 bytes is met in a listing file
- improved the error message issued when a file listing (-[ or -] option) is missing, so that it provides the missing filename
- improved the error message issued when a line of a file listing exceeds 20479 characters, so that it displays the start of that line
- fixed bug in file listing (-[ option) leading some directories and their content to be excluded in a somewhat rare condition
- improved behavior when dar reads a negative date. Instead of aborting, it now asks the user whether it can substitute such a value with zero
- improved behavior when dar is asked to read an archive located in a directory that does not exist. The DUC file passed to the -E option is now properly run in that case too, and has the possibility, for example, to create that directory and download the requested file

from 2.5.10 to 2.5.11
- minor feature: display the archive header, which is never ciphered, and abort. This feature is activated while listing archive content by adding the -aheader option. It brings the side effect of inverting two lines, "catalogue size" and "user comment", in the archive summary (dar -l archive -q)
- added date format info for the -w option in the "dar_manager -h" usage help
- fixed several mistakes in tools.cpp leading compilation to fail in certain environments
- fixed a typo in filesystem.cpp and a portability issue that led compilation to fail under openbsd 6.0
- fixed bug in the filtering mechanism relying on file listings (-[ and -] options) that could not find an entry in the listing under certain conditions, leading a file to not be excluded as requested or not included as requested

from 2.5.9 to 2.5.10
- fixed bug: the -r option ("only more recent" overwriting policy) was considering a file to be more recent when it had the exact same date as the file in place.
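The -r policy fix above boils down to using a strict comparison: a candidate is "more recent" only if strictly newer, so equal dates must not trigger overwriting. A minimal sketch (function name assumed, not dar's code):

```python
def is_more_recent(candidate_mtime, inplace_mtime):
    """'Only more recent' overwriting check sketch: strictly greater, so a
    file with the exact same date as the in-place one is NOT considered
    more recent -- the bug fixed above (illustrative, not dar's code)."""
    return candidate_mtime > inplace_mtime
```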
- updated documentation about requirements for compiling dar from sources
- fixed bug: bug met when restoring a file that has the immutable flag set. Dar/libdar failed restoring such a file in the context of differential/incremental backup. The fix consists of removing the immutable flag from the filesystem before restoring the new version of the file's data, then setting the immutable flag back afterward
- updated FAQ with a description of the way dar uses lzo compression compared to the lzop program
- fixed bug: aborting an archive was leading to an unreadable archive in direct mode, most of the time when strong encryption was used
- minor new feature: added two flavors of the lzo algorithm, lzop-1 and lzop-3, in order to match compression levels 1 and 3 of the lzop command

from 2.5.8 to 2.5.9
- fixed typos in documentation about dar's internal use of symmetric encryption
- fixed bug: a merging operation could wrongly melt different unrelated hard-linked inodes when merging using an archive which resulted from a previous merging operation
- fixed bug: aborting an archive was sometimes leading to an unreadable archive in direct mode (readable only in --sequential-read mode)
- fixed bug: libgpgme was only present at linking time of the final binaries (dar, dar_slave, dar_xform, dar_manager, dar_cp, dar_split), not at linking time of libdar, which caused a problem under the Linux Rosa distro where the "no-undefined" flag is passed to the linker
- minor new feature: the -ay option has been added to display sizes in bytes instead of the default, which uses the largest possible unit (Kio, Mio, and so on)

from 2.5.7 to 2.5.8
- fixed double memory release occurring in a particular case of read error
- improved robustness of the FSA code against data corruption
- fixed bug: DAR_DUC_PATH was not used with the -F and -~ options
- new feature: added the -aduc option to combine several -E options using the shell '&&' operator rather than the shell ';' operator.
The consequence is that with the -aduc option a non-zero exit status of any script (and not only of the script given to the last -E option) will lead dar to report the error
- man page updated about the combination of several -E options
- fixed bug: merging partial FSA led to a self-reported bug in cat_inode.cpp at line 615

from 2.5.6 to 2.5.7
- fixed bug leading dar to not include directories given to the -g option nor to exclude directories given to the -P option when, at the same time, the directory given to the -R option starts with a dot ("-R ./here" in place of "-R here")
- bug fix and speed improvement: under certain circumstances dar was reading several times the data at slice boundaries, leading dar to ask for slice N then N-1 then again N; this caused sub-optimal performance and triggered the user script unnecessarily

from 2.5.5 to 2.5.6
- added speed optimization when comparing dates with hourshift flexibility (-H option)
- fixed bug met when using as reference an archive generated by dar 2.5.4 or older, which led dar to save almost all files, even those that did not change

from 2.5.4 to 2.5.5
- fixed the message displayed when reading old archives
- fixed bug that prevented dar-2.5.x code from reading the old archive format when special allocation was set (by default) at compilation time
- disabled special-alloc by default, reducing memory footprint
- fixed error in the FAQ about the way ctime/atime/mtime are modified during normal operating system life
- new implementation of class datetime with a better memory footprint
- avoid storing the sub-microsecond part of dates to preserve limitint's ability to store large dates
- moved the field cat_inode::last_cha from pointer-to-field to a plain field of the class, which slightly reduces the catalogue memory footprint
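The -aduc behavior described a few entries above (chaining -E scripts with '&&' rather than ';') follows directly from shell semantics, which can be demonstrated with any shell:

```python
import subprocess

def chain_status(op):
    """Run two commands chained by 'op' and return the shell's exit
    status -- illustrating why chaining -E scripts with '&&' propagates
    an earlier failure while ';' silently discards it."""
    return subprocess.run(["sh", "-c", f"false {op} true"]).returncode
```

With ';' the status of the list is that of the last command (0 here, losing the failure of `false`); with '&&' the failed command short-circuits the list and its non-zero status is reported.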
- fixed bug in the returned exit status when dar failed executing a DUC command due to a system error (now returning the expected code 6 in that case too)

from 2.5.3 to 2.5.4
- fixed missing included files for the libdar API
- removed extra try/catch block introduced by commit 72da5cad5e52f959414b3163a2e2a320c2bc721e
- removed sanity check that caused a problem when writing an archive to a FUSE-based filesystem
- fixed the -E script/command not being called after the last slice creation when encryption or slice hashing was used
- fixed bug in dar_manager: archive permutation in a database led libdar to check an archive number out of range under certain circumstances
- fixed inversion of the condition triggering a warning about archive date order in a dar_manager database while moving an archive within a database
- fixed typos in documentation
- catalogue memory optimization, with the drawback of limiting the number of entries in an archive to the max integer supported by the libdar flavor (32 bits/64 bits/infinint)
- fixed the configure script to temporarily rely on LIBS rather than LDFLAGS to check for gpgme availability
- removed the order dependency between the -A and -9 options of dar_manager: -9 can now be specified before or after the -A option
- reset to "false" the "inode_wrote" flag of the hard link data structure before testing and merging. Merging a previously tested archive, or testing a second time, would not include hard-linked inodes in the operation. This situation does not occur with dar but could happen with some external tools that keep the catalogue in memory to perform different operations on it
- fixed bug in the routine that detects existing slices to warn the user and/or avoid overwriting, a bug that led dar to "bark" when an archive base name started with a + character
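The "+ character" slice-detection bug above is the classic pitfall of building a pattern from a user-supplied base name without escaping regex metacharacters. A hedged sketch (this is not dar's actual slice-matching code; the slice naming convention `basename.<number>.dar` is taken from dar's documented layout):

```python
import re

def slice_pattern(basename):
    """Build a regex matching slice files 'basename.<number>.dar'.
    re.escape protects metacharacters such as '+' in the base name --
    the kind of issue behind the fix above (illustrative sketch)."""
    return re.compile(r"^" + re.escape(basename) + r"\.(\d+)\.dar$")
```

Without re.escape, a leading '+' would be interpreted as a repetition operator and the pattern would fail to compile or match.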
- avoid using AM_PATH_GPGME in the configure script when gpgme.m4 is not available
- added new methods in the libdar API to obtain the archive offset and storage size of saved files (class list_entry)
- added a new method in the libdar API to translate an archive offset to a file offset (class archive)
- report a specific error message when a filename returned by the system has the maximum length supported by the system itself, assuming the filename has been truncated

from 2.5.2 to 2.5.3
- fixed a 2.5.x build issue met when a 2.4.x libdar library is already installed on a FreeBSD system
- improved message and behavior of libdar in lax mode when a truncated archive is read
- fixed self-reported bug at "tronconneuse.cpp line 561" met while reading a truncated/corrupted archive
- fixed file descriptors not being closed, met when saving a filesystem that does not have ExtX FSA available
- fixed the configure script to be more robust on systems where gpgme.h is installed in a non-standard path and the user did not provide coherent CPPFLAGS, LDFLAGS before calling ./configure
- display CRC values when listing an isolated catalogue as XML output
- fixed compilation issue when the system does not provide the strerror_r() call
- avoid warning about FSA absence when fsa-scope is set to "none"
- added the --disable-fadvise option to the configure script for those who want back full pressure from dar on the system cache (same behavior as 2.4.x)
- fixed bug: fadvise() was called at the wrong time, making it have no effect
- updated FAQ about comparative performance from 2.4.x to 2.5.x
- optimization: reduced the number of calls to dup() at libdar startup
- improvement: print the file type on verbose output
- new feature: added the %t macro reflecting the inode type in dar's --backup-hook-execute option

from 2.5.1 to 2.5.2
- fixed bug met when permission is denied while reading or writing slices
- fixed bug that prevented creating an archive at the root of the filesystem
- fixed bug met in a rare situation while reading in
sequential-read mode an archive encrypted using gnupg encryption. In that situation libdar could fail reading the archive (but succeeded in normal read mode), issuing an obscure message (the message has also been fixed)
- code simplification: removed the field reading_version from class crypto_sym, as its parent class tronconneuse already has that information
- removed an extra newline displayed by dar at end of execution
- fixed bug preventing dar from properly reading an entry (reporting a CRC error) when a specific sequence of characters (the start of an escape sequence) fell at the end of the read buffer of the escape layer
- speed optimization for the datetime class
- fixed bug that prevented dar from reading archives in sequential-read mode when reading from a pipe
- fixed bug in the non-regression test routine provided beside dar/libdar
- fixed a display message not always showing in the correct context
- fixed a case inversion leading the cache layer to not be used when necessary and to be used when useless while reading an archive
- improved the heuristic in dar_manager to determine the date a file has been deleted

from 2.5.0 to 2.5.1
- fixed display bug in dar_manager met when using the -o option and adding options for dar that do not exist for dar_manager (like the -R option)
- reactivated an optimization, disabled by mistake, for some read-only dar_manager database operations
- fixed compilation issue of dar against gcc 4.9.2
- fixed syntax error in a dar_manager message
- fixed bug that prevented dar_manager from writing down a modified database when only the database header was modified (-o, -b, -p switches).
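The escape-layer bug above (an escape sequence falling at the end of the read buffer) is a general buffer-boundary problem: a marker split across two reads is missed unless a small carry-over from the previous buffer is kept. A generic sketch of the technique, not dar's escape-layer code:

```python
def find_mark(chunks, mark):
    """Scan a stream of buffers for 'mark', keeping a carry-over of the
    last len(mark)-1 bytes so a mark split across two buffer reads is
    still found -- the kind of boundary condition fixed above (sketch)."""
    carry = b""
    consumed = 0  # bytes of 'chunks' already consumed
    for chunk in chunks:
        buf = carry + chunk
        pos = buf.find(mark)
        if pos != -1:
            return consumed - len(carry) + pos  # absolute offset in stream
        carry = buf[-(len(mark) - 1):] if len(mark) > 1 else b""
        consumed += len(chunk)
    return -1
```

Without the carry-over, a mark straddling two chunks would never match in either chunk alone.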
- added dar_manager database format version information with the -l option
- fixed libdar's inability to read dar_manager's database format version 4
- adapted code to build under the cygwin environment, where thread_local seems broken
- fixed output going to stderr in place of stdout for licensing information
- fixed bug met when permission is denied while reading or writing slices
- fixed bug that prevented creating an archive at the root of the filesystem

from 2.4.x to 2.5.0
- added support for posix_fadvise()
- added the entrepot class hierarchy to support in the future other storage types than the local filesystem for slices
- API: added access to the entrepot through the API
- modified class hash_fichier so that it becomes entrepot-independent
- API: extended the libdar API with an additional and simpler way to read an archive: the archive::get_children_in_table() method, see doc/API_tutorial.html for details
- added support for extX (see lsattr(1)) and HFS+ (birthtime date) Filesystem Specific Attributes (FSA)
- dar is now able to skip backward when a file is found to be "dirty" at backup time. This avoids wasting space in the archive but is only possible if the backward position is located in the current slice and no slice hashing nor strong encryption is used. Of course, if the archive is written to a pipe or to stdout, skipping back to retry saving data at the same place is not possible either; the --retry-on-change option stays usable in those cases at the cost of data duplication (wasted byte amount, see --retry-on-change in the man page)
- by default dar now performs up to 3 retries but does not allow wasted bytes if a file has changed at the time it was read for backup; this can be modified using the --retry-on-change option
- with the same constraints as for a changing file, if a file is saved compressed but its compressed data uses more space than uncompressed, the file's data is re-saved uncompressed. However, if skipping backward is not possible, the data is kept compressed
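The "store uncompressed if compression grows the data" behavior in the last entry above can be sketched on a seekable stream: write the compressed form, and if it is not smaller, skip back and store the raw data instead. This is an illustrative sketch using zlib, not dar's compression layer.

```python
import io
import os
import zlib

def write_maybe_compressed(out, data):
    """Store 'data' compressed, but if compression grows it, skip back and
    store it uncompressed -- a sketch of the behavior described above,
    which dar can only apply when skipping backward is possible."""
    start = out.tell()
    packed = zlib.compress(data)
    if len(packed) < len(data):
        out.write(packed)
        return True                 # stored compressed
    out.seek(start)                 # skip backward over nothing written yet
    out.write(data)
    out.truncate()
    return False                    # stored uncompressed
```

On a non-seekable output (a pipe), the seek would fail, which is why the real behavior falls back to keeping the data compressed in that case.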
- if the system provides it, dar uses "Linux capabilities" to check for the ability to set file ownership when dar is not run as root. This allows dar to restore ownership when allowed, even when it is not run as superuser
- removed the dar-help tool used to build dar -h messages; that tool had been useless for a long time
- added several more specific verbosity options: -vm, -vf and -vt
- added support for microsecond timestamps (atime, mtime, ctime, birthtime)
- use lutime() to restore the atime/mtime of symlinks on systems that support it
- API: removed the backward-compatible API for old libdar 4.4.x
- API: simplified the implementation of archive isolation thanks to the isolation evolution features brought by release 2.4.0. Memory requirement is now divided by two compared to releases of the previous branch (2.4.x)
- dar has been updated to use this new API for archive isolation
- added the exclude-by-ea feature to avoid saving inodes that have a particular user-defined EA set
- added comparison of an isolated catalogue with a filesystem, relying on the embedded data CRC and inode metadata in the absence of the saved data
- the new archive format (version 9) holds the ciphering algorithm used at creation time; only the passphrase is now required at reading time and the -K option may be omitted, which will lead dar to prompt for the passphrase
- added support for public key encryption (GnuPG), supporting several encryption keys/recipients for a given archive
- added support for public key signature when public key encryption is used
- while listing archive contents, directories now show the size and average compression ratio of the data they contain
- the archive summary (-l with -q options) now reports the global compression ratio
- added the -vd switch to only display the current directory under process for creation, diff, test, extraction and merging operations
- added xz/lzma compression support
- added the -Tslicing listing option to show the slice location of files inside an archive.
- isolated catalogues now keep a record of the slicing layout of their archive of reference, in order to provide the -Tslicing feature when used on the isolated catalogue alone
- however, if an archive has been re-sliced (using dar_xform) after its isolated catalogue was generated, using the -Tslicing option with the isolated catalogue would give wrong information. To overcome that, it is possible to specify the new slicing of the archive of reference by using the -s and -S options in conjunction with -Tslicing
- added the dar_split command to provide on-fly multi-volume archive support for tape media
- experimental feature to have libdar use several threads (not activated by default due to poor performance gain)
- dar now aborts when a given user target cannot be found in an included file
- added the sha512 hashing algorithm beside the already available md5 and sha1; the generated hash file can be used with the 'sha512sum -c' command
- removed the useless --jog option for memory management
- removed the previously deprecated -y/--bzip2 command; bzip2 compression remains available using the -z option (-zbzip2 or --compression=bzip2)
- replaced SHA1 by SHA224 to generate the IV for encryption blocks; this slightly improves the randomness of the IV and stays available when libgcrypt is run in FIPS mode

from 2.4.23 to 2.4.24
- fixed bug: a merging operation could wrongly melt different unrelated hard-linked inodes when merging using an archive which resulted from a previous merging operation
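The sha512 hash files mentioned above are usable with 'sha512sum -c' because that tool expects lines of the form "<hex digest>  <filename>" (two spaces). A minimal sketch producing one such line with Python's hashlib (the helper name is an assumption):

```python
import hashlib

def hash_line(name, data):
    """Produce one line in the format 'sha512sum -c' accepts:
    '<128 hex chars>  <filename>' (note the two-space separator).
    Illustrative sketch, not dar's hashing code."""
    return hashlib.sha512(data).hexdigest() + "  " + name
```

Writing such lines to 'archive.1.dar.sha512' lets 'sha512sum -c archive.1.dar.sha512' verify the slice later.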
from 2.4.22 to 2.4.23
- fixed bug leading dar to not include directories given to the -g option nor to exclude directories given to the -P option when, at the same time, the directory given to the -R option starts with a dot ("-R ./here" in place of "-R here")

from 2.4.21 to 2.4.22
- fixed bug in the returned exit status when dar failed executing a DUC command due to a system error (now returning the expected code 6 in that case too)

from 2.4.20 to 2.4.21
- removed sanity check that caused a problem when writing an archive to a FUSE-based filesystem
- fixed bug in dar_manager: archive permutation in a database led libdar to check an archive number out of range under certain circumstances
- fixed inversion of the condition triggering a warning about archive date order in a dar_manager database while moving an archive within a database
- removed the order dependency between the -A and -9 options of dar_manager: -9 can now be specified before or after the -A option
- reset to "false" the "inode_wrote" flag of the hard link data structure before testing and merging. Merging a previously tested archive, or testing a second time, would not include hard-linked inodes in the operation. This situation does not occur with dar but could happen with some external tools that keep the catalogue in memory to perform different operations on it
- fixed bug in the routine that detects existing slices to warn the user and/or avoid overwriting, a bug that led dar to "bark" when an archive base name started with a + character
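The "inode_wrote" fix above is the generic pattern of resetting per-inode visited flags before each pass, so a catalogue kept in memory can be processed more than once. A toy sketch with an assumed catalogue structure (a dict of inodes plus (name, inode-id) entries), not dar's actual catalogue classes:

```python
def pass_over(catalogue):
    """Process each hard-link group once per operation; the 'wrote' flag
    is reset up front so a second operation on the same in-memory
    catalogue still sees every inode -- a sketch of the fix above."""
    for ino in catalogue["inodes"].values():
        ino["wrote"] = False                    # the reset the fix adds
    seen = []
    for name, ino_id in catalogue["entries"]:
        ino = catalogue["inodes"][ino_id]
        if not ino["wrote"]:                    # hard links: first name wins
            seen.append(name)
            ino["wrote"] = True
    return seen
```

Without the reset loop, a second pass over the same catalogue would skip every hard-linked inode already flagged by the first pass.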

from 2.4.19 to 2.4.20
- fixed display bug in dar_manager met when using the -o option and passing options for dar that do not exist for dar_manager (like the -R option)
- reactivated an optimization, disabled by mistake, for some read-only dar_manager database operations
- fixed compilation issue with gcc 4.9.2
- fixed syntax error in a dar_manager message
- fixed bug that prevented creating an archive at the root of the filesystem

from 2.4.18 to 2.4.19
- fixed missing quote in dar_par.dcf, which is called by the par2 directive
- fixed bug in dar_manager's -u option, which did not display the most recent files of an archive when they had been marked as removed in a more recent archive of the same dar_manager database
- fixed bug met while restoring, in sequential read mode, a file having several copies (the file was modified at the time it was saved and retry-on-change was set)

from 2.4.17 to 2.4.18
- the Initial Vector used for strong encryption was set with pseudo-random data generated using the SHA1 message digest and the blowfish cipher, which are not available when libgcrypt is running in FIPS mode. Since 2.4.18, SHA256 and AES256 are used for IV assignment, in order to have libdar compatible with FIPS mode. For data encryption nothing changes: the cipher specified (-K, -J, -$ options on the CLI) is used as before
- fixed bug met when performing archive isolation in sequential-read mode: if an archive corruption or a truncated archive led an inode to not have a readable CRC, dar aborted and issued a BUG report
- updated the list of projects relying on dar/libdar

from 2.4.16 to 2.4.17
- fixed issue met when case insensitive comparison was requested and an invalid wide character for the current locale was found in a filename. In such a situation the corresponding file was never saved (a filesystem error was assumed for that file); now ASCII case insensitivity is used as a fallback

from 2.4.15 to 2.4.16
- fixed the catalogue size information displayed by archive listing when the archive is read in --sequential-read mode
- fixed bug that prevented dar releases 2.4.x up to 2.4.15 from reading encrypted archives generated by dar release 2.3.x and below
- added an informational note at the end of ./configure script execution when --enable-mode has not been used
- added support for case sensitivity in filename comparison (-an option) for character sets other than the POSIX/C locale, like Cyrillic for example
- fixed bashisms in doc/samples scripts

from 2.4.14 to 2.4.15
- fixed bug met when reading an encrypted archive in sequential mode
- fixed bug met when reading an encrypted archive in sequential mode from an anonymous pipe
- changed option '-;' to -9, as '-;' does not work on all systems with getopt (only the long option equivalent --min-digits worked), for the dar, dar_cp, dar_manager, dar_xform and dar_slave commands
- fixed bug met when restoring deleted files in sequential read mode while some directories where they should be "restored" are not readable or could not be restored earlier
- added an extra buffer to handle sequential read of an encrypted archive when the last crypto block contains some, but not all, clear data after the encrypted data (the archive trailer)
- fixed compilation issue using clang
- fixed bug that prevented using the -~ option with on-fly catalogue isolation in order to execute a user command once on-fly isolation has completed
- added some autoconf magic to determine the correct (BSD/GNU) flag to use with sed in order to activate regular expression parsing
- new implementation of the mask_list class, which is compatible with libc++
- fixed bug met on FreeBSD with dar_xform, where the system provides a standard input file descriptor in read-write instead of read-only mode

from 2.4.13 to 2.4.14
- limited the memory consumption of the cache layer to stay below 10 MiB; under certain circumstances (very large archives) it could grow up to an insane value like 50% of the available RAM. Reducing it to 10 MiB does not impact performance in a noticeable manner, while it avoids the system swapping out due to the libdar cache layer becoming huge
- added --with-pkgconfigdir to define an alternative path for the libdar pkgconfig file (to ease portability to FreeBSD)
- modified some Makefile.am files for better FreeBSD support
- fixed display bug in the XML listing output concerning hard linked inodes
- fixed typo in man page
- fixed bug met while isolating a catalogue in --sequential-read mode; using such an isolated catalogue led dar to report an error about inaccessible EA
- displayed the compression rate for sparse files even when they are uncompressed, as sparse file detection also shrinks stored files
- fixed bug that led libdar to fail comparing an inode having EA when the comparison is done in --sequential-read mode
- fixed display bug in the libgcrypt check of the configure script for the minimum required version
- fixed 'make clean' to remove some forgotten files generated by 'make'

from 2.4.12 to 2.4.13
- added an initialization value for two variables to avoid an inappropriate warning when compiling with the -Wall option
- reduced UNIX_PATH_MAX, when not defined by the system, from 108 to 104 bytes to accommodate BSD systems
- fixed the assignment operator of class criterium, which was not returning any value as it should
- removed a useless boolean expression that always succeeds in a logical AND expression
- added support for backslash quoting of characters in DCF files
- fixed compilation issues with clang / FreeBSD, thanks to Neil Darlow's server ;-)
- fixed compilation warning due to deprecated symbols in libgcrypt header files
- replaced GNU make specific rules by legacy ones to avoid automake warnings about them
- removed old unused stuff from the misc sub-directory
- added a warning at compilation time if the libgcrypt used is older than 1.6.0
- added a warning at execution time if hash computation is requested with slices greater than 256 GiB and the libgcrypt linked (dynamically or statically) is older than 1.6.0
- added alternative methods in the list_entry API class to return dates as numbers of seconds
- fixed bug in hour-shift (-H option) when comparing dates from an old extracted catalogue (archive format 7 or older)
- fixed documentation bug about the meaning of the compression ratio
- fixed a display bug about the "compression flag" wrongly displayed for uncompressed files
- fixed unhandled exception when giving a non-numeric argument to the -1 option

from 2.4.11 to 2.4.12
- for correctness, fixed delete vs delete[] on a vector of char (no incidence reported)
- fixed out of range access in a routine used to read very old archive formats
- fixed error in a logical expression that made a sanity test useless
- removed a duplicated variable assignment
- updated FAQ
- fixed typos and spelling errors
- fixed bug (reported by Torsten Bronger) in the escape layer leading libdar to wrongly report a file as corrupted at reading time
- fixed bug in the sparse file detection mechanism that led the minimum hole detection size to become a multiple of the default or specified value. This implied a less efficient reduction of sparse files, because smaller holes in files were ignored
- fixed and updated the man page about the --go-into option
- updated the full-from-diff target in the /etc/darrc default file
- added a debug option in the hash_file class (only used by testing tools) to troubleshoot a sha1/md5 hash problem on slices larger than (2**38)+63 bytes, a bug reported by Mike Lenzen and tracked down by Yuriy Kaminskiy to libgcrypt. Note: this bug is still open, due to an integer overflow in libgcrypt
- backported from the current development code an additional and simpler way to read an archive using the libdar API. This API extension is not used by the dar command-line tools for now
- fixed installation of libdar header files on Darwin, where "DARwin" macros were not filtered out from the generated libdar header files
- fixed self reported bug 'generic_file.cpp line 309' met while comparing an archive with a filesystem
- updated code in order to compile with gcc-4.8.2 in g++11 mode (partial implementation and adaptation of Fabian Stanke's patch)
- fixed bug met while performing a verbose archive listing in sequential read mode
- added Ryan Schmidt's patch to properly display the status at the end of the ./configure script under BSD systems (in particular Mac OS X)
- updated the configure.ac script to fix a warning reported by autoconf when generating the ./configure script
- addressed a portability problem with BSD systems that do not provide a -d option to the 'cp' command, preventing proper installation of the Doxygen documentation. Fix based on a patch provided by Jan Gosmann

from 2.4.10 to 2.4.11
- modified behavior of 'dar -h' and 'dar -V': both now return 0 as exit status instead of 1 (which means syntax error)
- fixed bug: -Q is now available with -V under the collapsed forms -QV or -VQ
- fixed typo in documentation
- fixed memory leak met when dar fails a merging operation because the resulting archive is specified in a directory that does not exist
- fixed bug met when isolating a differential backup in sequential read mode
- fixed bug about slice file permissions not taking the umask variable into account when the --hash feature is used
- fixed performance issue when reading an archive over a pair of pipes using dar_slave (possibly over ssh), when the archive makes use of escape marks and no encryption is used
- added the "full-from-diff" target in the /etc/darrc default file
- fixed bug preventing the reading of a truncated archive in direct access mode with the help of an external catalogue
- new and better implementation of archive extraction in sequential read mode
- fixed bug (segfault) met when hitting CTRL-C while reading an archive in sequential mode
- fixed libdar.pc for pkg-config regarding the cflags given to external applications
- fixed memory allocation/deallocation mismatches (delete vs delete[]) concerning four vectors of char
- fixed error in a logical expression that made a sanity test useless

from 2.4.9 to 2.4.10
- fixed libdar about a dar_manager database corruption that occurred when deleting the first archive of a database containing a plain file only existing in that first archive
- added code to clean up databases instead of aborting and reporting the previously described type of database corruption
- added a feature, when comparing an archive with the filesystem, to report the offset of the first difference found in a file. This was necessary to help solve the following bug:
- fixed bug in the sparse file detection mechanism that could lead, in some very particular (and rare) situations, to the loss of one byte from a file being saved. In that case testing the archive reported a CRC error for that file. So if you keep testing archives in your backup process and have not detected any problem, you can keep relying on your old backups. This bug also showed up when merging archives: dar aborted and reported that a merged file had a different CRC than the one stored in the archive of reference

from 2.4.8 to 2.4.9
- fixed bug: during a differential backup, dar saved an unchanged hard linked inode when a hard link on that inode was outside the -R root directory. This also had the effect of always saving files with long names on NTFS filesystems (!)
- adapted a patch provided by Kevin Wormington (new messages displayed)
- fixed syntax error in the configure script about execinfo detection
- removed the unused AM_ICONV macro from the configure script
- fixed bug met under Cygwin when an auxiliary test command failed to link because libgcrypt was not available
- updated the mini-howto by Grzegorz Adam Hankiewicz
- updated French message translations
- restricted the security warning for plain files and hard linked plain files
- fixed display bug in dar_cp when manipulating files larger than 2 GB
- fixed SEGFAULT met when adding to a dar_manager database an archive whose base name is an empty string
- improved an error message, now reporting the -B included file in which a syntax error has been met
- modified the dar_manager database to consider both ctime and mtime as the timestamp value for the data of saved files. This suppresses the warning about badly ordered archives in a database when some files have been restored from an old backup

from 2.4.7 to 2.4.8
- documentation fixes and updates
- improved database listing efficiency
- reduced memory usage of the caching layer in libdar
- fixed self reported bug caused by a memory allocation failure
- fixed a SIGSEGV caused by a double free in dar_xform when a syntax error is met on the command-line
- dar_xform was not able to properly transform archives generated by dar older than release 2.4.0
- fixed bug that left dar unable to remove a directory at restoration time
- replaced the old remaining "bcopy" occurrence by a call to memcpy
- fixed compilation warning under ArchLinux
- fixed crash met while creating a backup with on-fly isolation
- fixed libdar behavior when reading a strongly corrupted encrypted archive

from 2.4.6 to 2.4.7
- fixed memory allocation bug in the crc class that led glibc to abort dar
- reviewed code and replaced some remaining occurrences of bzero/bcopy by their recommended replacements
- fixed compilation problem under Solaris
- fixed bug that could lead a file to be wrongly reported as different from the one on the filesystem, when that file had been changed while it was being saved, then saved a second time, but had its size modified since the first time it was saved

from 2.4.5 to 2.4.6
- fixed bug met while interrupting compressed archive creation: the resulting archive was only readable in --sequential-read mode
- fixed bug met while reading an interrupted archive in sequential reading mode. It led dar to not release some objects from memory at the end of the operation, which displayed an ugly error message from the libdar self-check routine
- fixed message reporting an unknown system group when converting a gid to a name (it reported an unknown "user" instead of an unknown "group")
- removed the $Id:$ macro from files, as we moved from CVS to GIT
- updated the package to distribute Patrick Nagel's scripts and documentation
- updated the URL pointing to Patrick Nagel's web site
- updated the documentation describing how to get the source code from GIT (no more from CVS)
- fixed typo in configure.ac
- added info on how to build a brand-new dar tarball from the source in GIT
- modified the end of the message shown by the -h option to point to the man page for more _options_ rather than _details_
- replaced − in the generated HTML documentation by a standard ASCII dash
- fixed alignment bug in CRC calculation that led libdar based applications to crash on sparc-based systems

from 2.4.4 to 2.4.5
- updated sample scripts to be compatible with dar's --min-digits option
- added a missing include file to be able to compile with gcc-4.7.0
- removed an unused variable in filtre.cpp
- fixed a display bug met when comparing an archive with the filesystem, leading to a segmentation fault (%S in place of %i in a mask)
- fixed bug leading dar to not restore some directories from differential backups when they are absent from the filesystem
- fixed bug that showed an "uncaught exception" message at the end of archive listing, for dar shared binaries only, compiled in infinint mode, under ArchLinux
- updated the configure script to link with libexecinfo when available
- added the possibility to disable the use of execinfo thanks to the new --disable-execinfo option of the ./configure script
- added Andreas Wolff's patch to fix a bug under Cygwin (segfault on program termination)

from 2.4.3 to 2.4.4
- fixed man pages in the NAME section: added a whatis entry
- fixed segfault in the internal error reporting code (delete[] in place of free())
- fixed bug: dar_manager was not able to properly read the latest generated database version when Extended Attributes were recorded for some files
- avoided reporting unreleased memory blocks when compilation optimization has been used (dar, dar_manager, dar_cp, dar_slave and dar_xform all reported unreleased memory when gcc optimization was used in "infinint" mode)

from 2.4.2 to 2.4.3
- fixed absurd compilation warning about a possibly uninitialized variable
- added the -ai switch to dar_manager to disable the warning about improper file order in a database
- fixed bug met while changing the order of archives in a dar_manager database
- avoided concurrent use of the -p and -Q options; an error message is shown in that situation
- modified the slice overwriting detection code to use a single atomic system call to create a new slice
- replaced delete by delete[] in the conversion routine of user/group to uid/gid
- added the possibility to disable the speed optimization for large directories
- added the memory troubleshooting option --enable-debug-memory
- simplified the class CRC implementation
- fixed failed memory release upon exception thrown in class deci
- modified the tlv and tlv_list classes and the ea_filesystem routines to not require corresponding temporary objects in libdar (saves a few new/delete calls)
- fixed silent bug in the tlv class: due to the absence of a copy constructor and destructor, some memory was not released and was referred to after the corresponding object's destruction
- modified the generic_file class to avoid temporary crc objects
- fixed bug in the header class that led to an unreleased field (this class lacked a destructor); the memory impact was however small: 10 bytes per slice
- fixed bug in class tlv: unreleased memory
- added protection code in class deci to properly release memory against exceptions thrown from called routines when the user interrupts the operation
- replaced the previous internal stack report code by backtrace()/backtrace_symbols()
- complete change of the implementation of the 'special-alloc' feature: the old code ate too much memory to be adapted to the new features added in release 2.4.0. This new implementation also brings some speed improvement

from 2.4.1 to 2.4.2
- fixed bug met when reading an archive in sequential-read mode
- fixed bug while filtering in sequential-read mode
- fixed backward compatibility in dar_manager with old archives (wrong dates for deleted files)
- fixed compilation problem on certain systems (missing #include statement)
- fixed documentation syntax and spelling

from 2.4.0 to 2.4.1
- added information about the "Cache Directory Tagging Standard" in doc/Feature.html
- fixed typo in doc/presentation.html
- fixed incomplete information in doc/usage_notes.html
- rewrote sample scripts from tcsh to bash in doc/usage_notes.html
- updated the Swedish translation with the last version from Peter Landgren, which had been forgotten for 2.4.0, sorry
- fixed installation problem where src/libdar/nls_swap.hpp was not installed
- fixed the version returned by libdar_4_4::get_version to let kdar (or other external programs relying on the backward compatible API) work as expected
- fixed bug in the code determining whether a directory is a subdirectory of another. This bug could lead dar to restore more files than just the ones specified with the -g option
- added the -k option to dar_manager for backward compatible behavior of dar_manager
- fixed bug in dar_manager, which recorded the wrong date of EA removal (when an inode has dropped all its EA since the archive of reference was made)
- adapted the dar_par_test.duc sample script to dar-2.4.x's new behavior
- adapted libdar to MacOS X to restore the mtime date after EA, as on this system modifying some system specific EA implies updating the mtime. But dar cannot yet store and restore the "creation date": it needs specific MacOS X code, as this value is not available through POSIX EA
- fixed backward compatibility bug where dar 2.4.0 was not able to read archives containing only a catalogue (differential backup when no change occurred, snapshot backup, extracted catalogue) generated by dar 2.3.x or older
- fixed self reported internal error met when dar merges archives generated by dar 2.3.x versions

from 2.3.x to 2.4.0
- hard link support for pipes, soft links, char and block devices has been added (so far, only hard links on plain files were supported)
- added a rich overwriting feature for merging archives (-/ option)
- changed the default behavior of dar: it no longer tries to preserve the atime of read files, which had the side effect of modifying the ctime. See the man page for the -aa and -ac options for details
- simplified the use of the "sed" command in Makefile.am files
- integrated Wiebe Cazemier's patch for the man page
- the -E option has been extended to also work when generating a single sliced archive (no need for the -s option to be able to use the -E option)
- the slice header has been extended to store additional information (the slice layout is now redundant in each slice and may be used as a backup from one slice to another in case of corruption)
- dar no longer needs to read the first and then the last slice of an archive to get its contents; it now only needs the last slice
- an isolated catalogue can now be used as a backup of the original archive's internal catalogue (-A option in conjunction with the -x option for example)
- added a directory look-up optimization (adaptation of Erik Wasser's patch)
- added -e option support (aka dry-run) to archive testing
- added the possibility to set the permissions and ownership of generated slices
- re-designed the libdar API to have all optional parameters carried by a class object in a single argument, with the aim of not breaking backward compatibility of the API upon each new feature addition. The libdar_4_4 namespace can be used for backward compatibility with older applications (see the API documentation)
- added the retry-on-change feature (-_ option)
- changed the storage of UID and GID from U_16 to infinint to support arbitrarily large UIDs and GIDs
- added lzo compression support
- dar_manager now uses an anonymous pipe to send its configuration to dar; this solves the problem due to the command-line length limitation
- dar now stores a "removal date" when a file disappeared since the archive of reference was made (so far, only the information that a file was removed was stored). This is needed for dar_manager (see the next new feature)
- dar_manager can now better restore the status of a set of files exactly as it was at any given time, from a set of full and differential backups. In particular, it no longer restores files that were removed at the requested date
- added a check in dar_manager to detect conditions where a file has a modification date that has been set to the past. Two objectives are at the root of this feature: proper restoration of files and detection of possible rootkits
- added a restoration mode that avoids restoring directory trees that do not contain any saved files (in particular when restoring a differential backup); see the man page for the -D option for more details
- reviewed the implementation of the code managing Extended Attributes (much faster now)
- added a batch feature (-@ option) to dar_manager
- added Furtive Read Mode support (O_NOATIME + fdopendir): when the system supports it, dar does not modify any date (ctime or atime) while reading data
- added the possibility to read archives sequentially (a la tar); see the --sequential-read option
- added the possibility to read from a pipe (single pipe, without dar_slave): use '-' as filename in conjunction with --sequential-read
- added the -P, -g, -[ and -] options to archive listing (-l option)
- added a sparse file detection mechanism (dar can save and restore sparse files)
- added a dirty flag in the archive for files that changed while being saved. By default a warning is issued when the user is about to restore a dirty file; this can be changed thanks to the --dirty-behavior option
- the -R option can receive an arbitrary string (an empty string is still excluded). In particular, dar will no longer complain if the given path contains // or \\; however it must, one way or another, point to something that exists!
- added a short listing feature (listing only the summary), using both the -l and -q options
- extended conditional statements in included files (DCF) with user defined targets (see the user targets paragraph at the end of the dar man page). User targets let the user add a set of options using a single keyword on the command-line
- a sample /etc/darrc is now proposed, with some user targets for common operations like compression without compressing already compressed files
- dar now releases the file descriptors of the archive(s) of reference before proceeding to the operation (differential backup, archive isolation, etc.)
- the user can add a comment in the archive header; some macros are provided for common options (see --user-comment in the man page). This comment can be seen when listing an archive in verbose mode (-l -v) or when displaying the archive's summary (-l -v -q)
- added a "security warning" feature, shown if the ctime of a file has changed in the filesystem while the inode has not changed at all (-asecu disables this feature). This is to spot possible rootkit files. Note that this may trigger false positives, if for example you change the EA of a file
- added feature: the DAR_DUC_PATH environment variable lets dar look for a DUC file (see the -E and -F options) in the given path
- added feature: the DAR_DCF_PATH environment variable, same as previously but for DCF files (see the -B option)
- added two targets for the conditional syntax: "reference:" and "auxiliary:"
- the weak blowfish implementation has been removed (no backward compatibility, as it suffered from a weak Initial Vector (IV) initialization), but the normal blowfish encryption stays in place
- due to openssl licensing, replaced the openssl dependency by libgcrypt (which stays optional)
- added the new ciphers aes256, twofish256, serpent256 and camellia256
- added the hash feature (--hash option), supporting the md5 and sha1 algorithms. The hash is calculated on the fly for each slice, before its data is even written to disk. This lets one check for media corruption even before a multi-sliced archive is finished. However, this does not prevent an archive from being corrupted due to a software bug (in dar, libdar or an external library), so it is always recommended to test the archive using dar's -t option
- the -G option (on-fly isolation) has been replaced by -@ when creating an archive, to reduce the number of letters used for options. This also makes available the usual switches associated with the -@ option, to define an encryption algorithm and passphrase for the on-fly isolated catalogue
- slice numbers may be padded with zeros (--min-digits option). Note that if this option is used when creating an archive, the same option is required for any operation on this archive
- added the -konly feature to only remove files recorded as suppressed at differential backup restoration time
- dar and libdar now store keys in secure memory (with the exception that a DCF file is parsed in unlocked memory; having a key in a DCF file is not as secure as having dar ask for the password at execution time using the "-K :" syntax)
- added a hook for backup: a user command or script can be run before and after saving files that match a given mask, all along the backup process (see the -<, -> and -= options)
- added feature: -alist-ea lets the user see the Extended Attributes of files while listing an archive's contents
- dar_manager can receive negative numbers to designate archives counting from the end of the database
- dar and libdar stay released under GPL 2.1 (not under GPL 3, and not under the Lesser GPL either)
- set the "little/big endian" naming to its usual meaning (it was inverted in the code); this changes neither dar's behavior nor its compatibility with different systems or older libdar versions
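The slice-name ordering that zero padding buys can be illustrated without invoking dar. Assuming the usual <basename>.<number>.dar naming scheme, something like a three-digit minimum keeps slices in lexicographic order:

```shell
# Simulated slice names: with a minimum of 3 digits, slice numbers are
# zero-padded, so shell globs and 'ls' sort them in numeric order
# (archive.10.dar would otherwise sort before archive.2.dar).
for n in 1 2 10; do
    printf 'archive.%03d.dar\n' "$n"
done
```

This prints archive.001.dar, archive.002.dar, archive.010.dar, which is also why the same padding option must be given again for later operations on the archive: the slice names themselves differ from the unpadded form.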
- added the -ai option to avoid warnings for unknown inode types
- added support for Solaris's Door files
- added feature: decremental backup

from 2.3.11 to 2.3.12
- avoided concurrent use of the -p and -Q options; an error message is shown in that situation

from 2.3.10 to 2.3.11
- fixed bug in the detection code for an existing archive of the same name when creating a new archive (it improperly considered some files sharing a part of the archive basename as old slices of an archive of the same base name)
- fixed a display bug: when using -v to see which arguments get passed to dar by means of a configuration file (DCF file, ~/.darrc or /etc/darrc), the last argument found in the file was not displayed
- fixed two bugs (one in the decompression routine, the other in the decryption routine) that led dar to segfault or run into an endless loop when reading a very corrupted archive
- added the -H option with the -d option
- fixed bug leading dar to report some files, to be removed at restoration time, as being of a different type than the expected one, when the reference used for that archive (differential backup) was an extracted catalogue
- fixed bug in dar's command-line parsing leading dar to free the same block of memory twice when an argument containing a double slash was given to -G [SF 3162716]
- probable fix (the problem is difficult to reproduce) for a double memory release in the storage class [SF 3163389]

from 2.3.9 to 2.3.10
- added a patch by Jan-Pascal van Best to have the -[ and -] options work with archive merging
- fixed bug in displaying dates [SF 2922417]
- enhanced the pseudo random number generation used in dar
- added an error message when an include/exclude file listing contains an invalid path (instead of a self reported bug message)
- modified the message displayed when some slices of an old archive having the same name are present in the destination directory (backup, isolation, merging, dar_xform)

from 2.3.8 to 2.3.9
- fixed bashism in the doc/examples/pause_every_n_slice.duc sample script [SF 2020090]
- added Jason Lewis's script "dar_backups.sh", which is an enhanced version of the script done by Roi Rodriguez Mendez & Mauro Silvosa Rivera
- added a message asking for a software upgrade when an archive in the new format (used by dar >= 2.4.0) is given to dar
- very small optimization of the EA reading process
- updated FAQ
- replaced "Catalogue" by "Archive Contents" in the output message (-l -v)
- added Sergey Feo's patch to dar_par.dcf
- added a check for the stddef.h header file's presence in the configure script
- fixed spelling
- added Charles's script in doc/sample
- added the -q option to dar
- added a licensing exception to allow distribution of dar beside the OpenSSL library
- bug fix: during archive diff (only), dar restored the atime recorded in the backup instead of that of the file in the system, before opening it for reading
- tested dar with valgrind

from 2.3.7 to 2.3.8
- fixed bug in libdar met when the user supplies an empty file as a list of files to include or exclude (-[ and -] options)
- fixed bug concerning the elastic buffers used beside strong encryption. No security issue here; just that in some rare situations the generated archive was not readable (testing your archive prevents you from losing data in this situation)
- added some speed optimizations
- avoided a warning appearing without the -v option set, when an error is met while fetching the value of the nodump flag (flag not supported on the filesystem, for example)

from 2.3.6 to 2.3.7
- fixed bug in dar_manager about locating the archive in which to find the latest EA
- fixed bug in the configure script to properly report full blowfish encryption support
- fixed a bug in dar_manager's statistics calculus of the most recent files per archive
- removed an inappropriate internal error check
- added the --disable-libdl-linking option
- fixed mistake in the API tutorial
- updated Swedish translation, by Peter Landgren
- fixed bug in the file filtering based on a listing file (-[ option)
- fixed typos and spelling errors in documentation
- updated code for clean compilation with gcc-4.2.3
- updated code for clean compilation with gcc-4.3 20080208 (experimental gcc)

from 2.3.5 to 2.3.6
- fixed Makefile.am in src/dar_suite (removed "/" after $(DESTDIR))
- fixed bug in regex mask building when not using ordered masks
- fixed bug that led dar_manager to report no error while some files failed to be restored due to the command-line for dar being too large
- fixed bug met when the user aborts the operation while dar is finalizing archive creation [SF #1800507]
- fixed problem with execvp when dar_manager launches dar

from 2.3.4 to 2.3.5
- changed the message displayed when adding a hard link to an archive while performing a differential backup
- added back the possibility to use the old blowfish implementation (bfw cipher)
- integrated an optimization patch from Sonni Norlov
- updated Swedish translation, by Peter Landgren
- updated French translation
- fixed broken Native Language Support in 2.3.x (where x<5)

from 2.3.3 to 2.3.4
- fixed behavior when a differential backup is interrupted (files that would have been read had no interruption occurred are no longer stored as "deleted" since the archive of reference) [SF #1669091]
- added an official method to access the catalogue's statistics through the API (for the next kdar version)
- fixed syntax error in the dar_par_create.duc and dar_par_test.duc files (Parchive integration with dar)
- minor spelling fix in an error message (compressor.cpp)
- added Wiebe Cazemier's two patches for the dar man page
- integrated patch from Dwayne C. Litzenberger to fix a weakness in dar's implementation of blowfish encryption
- improved the message returned when an invalid path is given as argument
- updated doc/sample/sample1.txt script file

from 2.3.2 to 2.3.3
- avoid using getpwuid() and getgrgid() for static linking
- fixed typo in dar's man page
- updated FAQ
- fixed bug: uncaught exception thrown when CTRL-C was hit while dar waits for an answer from the user [SF #1612205]
- fixed bug: unusable archive generated when CTRL-C was hit and blowfish encryption was used [SF #1632273]
- added a check to verify that the libdar used is compatible with the current dar suite programs [SF #1587643]
- fixed bug: added workaround for the right arithmetic shift operator (the binary produced by gcc-3.4.2 computes "v >> s" equal to "v" when v is an integer field composed of exactly s bits, while it should compute it to zero). This problem made 32-bit generated archives incompatible with 64-bit generated archives, but only when blowfish was used
- fixed bug met when the inode space is exhausted, thanks to "Jo - Ex-Bart" for this new feedback [SF #1632738]
- replaced &, <, >, ' and " in XML listing by their corresponding &...; escape sequences [SF #1597403]
- dar_manager could receive arguments stuck to the -o option (an error with regard to the documentation, but no warning was issued in that case, leading to confusion for some users) [SF #1598138]
- updated Veysel Ozer's automatic_backup script
- fixed hard link detection problem [SF #1667400]
- verbose output did not display hard link information
- merged patch on dar_cp by Andrea Palazzi to have it return EXIT_DATA_ERROR when some data errors have been reported [SF #1622913]
from 2.3.1 to 2.3.2
- fixed bug in Native Language Support when --enable-locale-dir was not set (Thomas Jacob's patch)
- updated Swedish translation by Peter Landgren
- --verbose=skipped was not available (only the short -vs form was)
- reviewed regex with ordered masks for the feature to better fit users' needs (Dave Vasilevsky's feedback)
- fixed bug where the compression algorithm was changed to maximum (fixed with Richard Fish's adequate patch)
- fixed tutorial with command-line evolution (dar's -g option in particular)
- latest version of Grzegorz Adam Hankiewicz's mini-howto
- fixed bug concerning restoration of only more recent files

from 2.3.0 to 2.3.1
- set back Nick Alcock's patch which had been dropped from 2.2.x to 2.3.x (patch name is "Do not moan about every single file on a non-ext2 filesystem")
- fixed compilation problem when thread-safe code is disabled
- integrated Wiebe Cazemier's patch for dar's man page
- fixed bug in listing: the -as option also listed files that had EA even when these were not saved in the archive
- file permissions of installed sample scripts lacked the executable bit
- fixed a bug that appeared when a file is removed while it is being saved by dar
- avoid an unnecessary warning when restoring a file in a directory that has a default EA set
- Cygwin has changed and no longer supports paths in the form "c:/some/where"; you have to use "/cygdrive/c/some/where" instead. Documentation has been updated accordingly.
from 2.2.x to 2.3.0
- added user_interaction::pause2() method
- added the snapshot feature
- added the Cache Directory Tagging detection feature
- adapted Wesley's patch for a pkgconfig file for libdar
- added -[ and -] options (file selection from a file listing). Important consequence for libdar user programs: the fs_root argument is now expanded to a full absolute path inside libdar, thus the mask you give for path inclusion/exclusion (the "subtree" argument) will be applied to the full absolute path of files under consideration for the operation. Assuming you have fs_root=tmp/A and the current directory is /var/tmp, your mask will be applied to strings like /var/tmp/A/some/file (instead of tmp/A/some/file as in the previous API version). Nothing changes if fs_root is given as an absolute path
- changed archive format to "05", due to a complete review of EA management
- upon reception of certain signals, dar aborts the backup nicely, producing a completely formatted archive with all the files saved so far. This archive can be taken as reference for a further backup to continue the operation at a later time
- dar_manager aborts properly upon reception of certain signals (does not leave the database partially updated)
- dar_slave and dar_xform now recognize when a slice name is given in place of a basename
- reviewed thread_cancellation (API change) to make it possible to cancel several threads at the same time
- prevented some deadlock situations that could occur when a signal is received inside a critical section
- dar_cp, dar_xform and dar_slave also abort nicely upon signal reception
- dar_manager can now restore files based on a given date (not always the most recent version)
- dar_manager now has an interactive mode (-i option)
- change in API: the warning() method need not be overwritten; the new protected method inherited_warning() must be overridden in its place (same function, same prototype as the original warning() method)
- dar_manager features are now part of libdar.
API has been completed with these new features
- added the "last_slice" context (%c with -E option) when creating an archive
- dar now checks that a file has not been modified while it was being read; if so, it reports a warning and returns a specific exit code
- removed the included gettext from the package (it is more a source of conflict with external gettext, and if somebody needs internationalization it is better to install libintl/gettext on its own)
- added George Foot's feedback about the good backup practice short guide
- added -e option to dar_manager
- added the progressive_report feature in the API
- dar can now pause every N slices where N >= 1
- integrated Dave Vasilevsky's patch to support Extended Attributes and file forks under MacOS X
- added method in the API for external programs to be able to list dar_manager databases, their file contents and their statistics
- added the merge/sub-archive feature
- removed [list of path] from the command line (-g option is now mandatory)
- added regex expressions in filters (-ar/-ag options)
- added -ak option
- added the --comparison-field option (extension of the --ignore-owner option, aka -O option)
- added the -af option (backup files more recent than a given date; others are kept as already saved)
- dar now takes care that an escape character can be sent when pressing the arrow keys and avoids considering them in this situation
- dar will now refuse to abort if the user presses escape when dar asks the user to be ready to write to a new slice
- adapted Wesley Leggette's patch for an XML archive listing
- added 'InRef' status for EA (same meaning as the one for file's data)

from 2.2.6 to 2.2.7
- updated Swedish translation by Peter Landgren
- fixed bug #37
- added back the old function (modified in 2.2.6) for backward compatibility
- added German translation by Markus Kamp

from 2.2.5 to 2.2.6
- fixed bug #36
- avoid removing slices when creating an archive in dry-run mode (-e option)
- fixed display problem in dar_cp that led to an uncaught exception just before
exiting

from 2.2.4 to 2.2.5
- limited the size of internal buffers allocated on the stack to not be greater than SSIZE_MAX when this macro is defined. This comes from feedback from "Steffe" at SourceForge after he ported dar to HP NonStop
- integrated Andrey Yasniy's patch: fixed differential backup problem with the ru_RU.koi8-r locale
- integrated Nick Alcock's patch: no warning shown when not on an EXT2 filesystem and the nodump feature has been activated
- avoid having arrow keys interpreted as the escape key (while they remain an escape key + one char, as received from the tty)
- added part of Kyler Klein's patch for OSX (Tiger) (only concerns the included gettext)

from 2.2.3 to 2.2.4
- fixed bug #35
- added in doc/samples the backup script of Roi Rodriguez Mendez & Mauro Silvosa Rivera
- updated Swedish translation by Peter Landgren

from 2.2.2 to 2.2.3
- fixed error in TUTORIAL (-P only receives relative paths)
- updated FAQ with memory requirement questions/problems
- added Bob Barry's script for determining memory requirements
- added documentation about NLS in doc/NOTES
- fixed bug concerning the << operator of the infinint class. This has no impact as this operator is not used in dar/libdar
- added Jakub Holy's script to doc/samples
- fixed bug with a patch transmitted from Debian (Brian May) about the detection of the ext2_fs.h header
- added warning in libdar when the user asks for the nodump flag to be checked while the nodump feature has not been activated at compilation time
- fixed dar man page about --gzip option usage when using an argument
- now counting as errors the files with too long filenames
- now counting files excluded due to the nodump flag as ignored due to filter selection

from 2.2.1 to 2.2.2
- fixed typo in dar man page (flowfish ;-) )
- -Q option now forces non-terminal mode even when dar is run from a terminal (tty); this makes it possible to run dar in the background without having the shell stop it upon text output
- removed unused control code for dar's command-line syntax
- spelling fix of the tutorial by Ralph Slooten
- added the pertinent part of the memory leak patch from Wesley Leggette (there is no bug here, as the function where the memory leak could occur is not used in dar (!))
- updated FAQ
- updated man page information about the syntax of optional arguments to options like -z9 or --zip 9
- avoid calls to textdomain() when --disable-nls is set
- updated doc/NOTES
- fixed potential memory leakage on some systems (to a "new[]" corresponded a "delete" in place of a "delete[]") (Wesley's patch). In consequence, for API users, note that the following calls
  - tools_str2charptr
  - tools_extract_basename
  - libdar_str2charptr_noexcept
  all return a char * pointer whose area must be released by the caller using the delete[] operator
- partially integrated Wesley's api_tutorial patch (added explanations)
- fixed installation problem of header files, reported by Juergen Menden
- updated the example programs so that they properly initialize libdar
- the gettext.h file was not installed with libdar headers
- fixed typo error reported by Peter Landgren
- updated api_tutorial with compilation & linking information
- fixed pedantic warning about some classes inherited from "mask" (the parent copy constructor was not called from the inherited copy constructor; note that the parent class is a pure virtual class)
- added Swedish translation (by Peter Landgren)
- fixed typo in French translation
- added a const_cast statement to avoid a compilation warning on some systems
- fixed problem on Solaris where the TIME & MIN non-canonical parameters for the terminal are not set by default to 1 and 0 (but to 4 and 0), making keyboard interaction impossible when dar needs user interaction
- added O_BINARY to open() mode in dar_cp; without this some problems occur under Cygwin
from 2.2.0 to 2.2.1
- fixed execution problem for testing programs
- added control code to avoid the "endless loop" warning when -M is used and root and archive are not located on the same filesystem
- replaced an internal bug report by a more appropriate exception (elastic.cpp line 191)
- fixed bug #31
- fixed bug #32
- fixed bug #33
- changed exception type when dar_manager's -D option does not receive an integer as argument
- fixed bug #34
- added Wesley Leggette's patch to the API tutorial
- fixed inconsistencies concerning Native Language Support in dar
- added gettext NLS domain switch when entering and exiting libdar
- fixed bug #30
- changed the way the ctermid() system call is used
- updated FAQ

from 2.1.x to 2.2.0
- caching read/write for the catalogue to drop the number of context switches
- added -aSI and -abinary options
- added -Q option
- added -G option
- fixed a display bug about archive size, present when listing with the -v option
- added -aa / -ac options
- added -M option
- thread-safe support for libdar
- added -g option
- added -am option
- added -acase / -an options
- user_interaction can now be based on a customized C++ class
- user_interaction_callback now has a context argument
- added feature: dar_manager now restores directory trees recursively
- added feature: dar_manager can receive a range of archive numbers with the -D option
- added summary at the end of the configure script
- added -j option (--jog), changing behavior when virtual memory is exhausted
- added Native Language Support
- added feature that proposes removal of slices of an older archive with the same basename
- libz is now optional
- libbz2 is now optional
- added OpenSSL's libcrypto dependency
- added blowfish strong encryption
- changed archive format number (version "04"); the difference occurs only when encryption is used
- moved libdar operations (archive creation, listing, testing ...)
as methods of the C++ archive class
- added thread cancellation routine
- added feature: password can be read out of the command line (interactively at execution time)
- added programming documentation (thanks to Doxygen)
- optimized CRC computation for speed
- added warning telling that [list of path] is deprecated (better use the -g option)
- added Todd Vierling's patch for dar to compile under Interix

from 2.1.5 to 2.1.6
- fixed compilation problem with gcc-3.x for dar64
- updated the libtool used to generate the configure script
- fixed old info in dar's man page

from 2.1.4 to 2.1.5
- added protection code against bad_alloc exception
- new configure option to bypass the libdl test
- removed expected exception lists in the deci, limitint, real_infinint and storage modules to improve global robustness
- removed the #pragma implementation/interface directives, which tend to become obsolete today and seem to be the cause of compilation problems on (recent) Linux kernel 2.6.7, for example
- added protection code to report bug conditions
- code simplification for filesystem reading (while performing backup)
- fixed bug #29
- fixed code syntax to support gcc-3.4.x

from 2.1.3 to 2.1.4
- fixed bug #27
- improved limitint overflow detection
- fixed bug #28

from 2.1.2 to 2.1.3
- fixed namespace visibility inconsistency for several calls to open()
- added "list:" key in conditional syntax, to stay coherent with the man page
- optimized dar_cp algorithm for speed in case of I/O error
- made dar_cp more talkative about events that succeed while copying data
- fixed bug #25
- fixed bug #26

from 2.1.1 to 2.1.2
- fixed bug #24
- added "-w d" option, which is equivalent to -w but necessary when dar is not compiled with GNU getopt
- updated documentation about GNU getopt() vs non-GNU getopt()
- updated configure script to have libgnugetopt auto-detection

from 2.1.0 to 2.1.1
- fixed configure script warning when an include file is "present but cannot be compiled"
- fixed bug #21
- fixed bug #22
- dar_xform and dar_slave
now send their help usage to stdout (instead of stderr)
- fixed typo in error message

from 2.0.x to 2.1.0
- fixed bug #17
- API version 2 documentation
- API version 2 implementation
- -E and -F can now be present several times on the command line and/or in included files (dar, dar_slave and dar_xform)
- context (%c in -E and -F) is now transmitted in the pipes from dar to dar_slave
- added -wa option
- added -as option
- added -e option
- updated the API to be able to add new encryption protocols later
- root (-R argument) can now be a symbolic link pointing to a directory
- fixed bug #17bis
- added the information returned by the system when an error occurs during read() to the message returned to the user
- fixed bug #18
- documentation about the filter mechanism added
- fixed bug #19
- don't fail for a file if permissions could not be restored
- fixed bug #20
- configure script does not mess with CXXFLAGS or CFLAGS except when using debugging options

from 2.0.3 to 2.0.4
- updated autoconf version used to generate the configure script (2.57 -> 2.59). Large file support is back with gcc-3 (it was only working with gcc-2)

from 2.0.2 to 2.0.3
- fixed bug #20

from 2.0.1 to 2.0.2
- fixed bug #18
- fixed bug #17bis
- documentation about the filter mechanism added
- fixed bug #19

from 2.0.0 to 2.0.1
- fixed bug #17

from version 1.3.0 to 2.0.0
- using configure script (built with automake & autoconf)
- creation of the libdar library
- API for libdar (version 1)
- updating TUTORIAL
- added chapter in NOTES for ssh / netcat use with dar
- added -H option
- making documentation for the API: DOC_API
- speed optimization for dar_manager
- enclosed libdar sources in the libdar namespace
- added libdar dynamic library support (using libtool)
- fixed bug in ui_printf.
The bug appeared with the shell_interaction split from user_interaction (for libdar)
- fixed bug in dar_manager when creating an empty database
- changed hourshift implementation (no static variable used anymore)
- changed code so that no dynamic allocation takes place before main() is called
- added compile-time option to replace infinint by 32-bit or 64-bit integers
- added special memory allocation (--enable-special-alloc) to better handle many small dynamic objects (in the OOP sense)
- fix: dar_manager no longer sends all its output to stderr; just interactive messages are sent there
- changed "dar_manager -u" to no longer display files present in the archive which have no saved data or EA in the requested archive
- removed displaying of the command line used for backup ("dar -v -l ...") as it becomes inaccurate due to included files and would consume too much space if it had to be expanded
- added sample scripts for using dar with Parchive
- now displaying options read from configuration files when using the -v option
- added %e and %c for user script parameters
- using UPX to compress binaries if available at compilation time
- removed comments put by mistake in 1.3.0 around the warning issued when trying to back up the archive into itself. This revealed a bug, which made the warning be issued in some wrong cases
- removed this previous warning when creating an archive on stdout
- fixed bug #15
- fixed error in libdar sanity checks, where exceptions were not raised (due to the lack of the "throw" keyword)
- fixed bug #16
- changed the order of arguments passed to dar by dar_manager, for -x to come before any other option (in particular -B options)
from version 1.2.1 to 1.3.0
- added parentheses for a warning to be able to show when opening a scrambled archive
- fixed bug #10
- added feature: --flat option
- improved slice name detection when given in place of an archive basename
- added feature: comments in the configuration file given to -B (see man page for more)
- added feature: --mincompr option
- fixed a display error when listing a hard link (the name of the first hard link seen on an inode was displayed in place of the name of each hard link). This did not concern the tree (-T option) listing
- added standard config files: ~/.darrc and /etc/darrc
- conditional statements in included files (using make-like targets)
- added feature: --noconf option
- fixed a bug: the warning message issued when the user asks dar to back up the archive into itself was not displayed in some cases
- fixed bug #11
- added total file counter per archive when listing a dar_manager database
- fixed bug #12
- improved slicename versus basename substitution warning and replacement
- changed internal name generation to avoid using the std::sstream class
- bzip2 compression implemented (needs the libbz2 library at compilation time)
- added the --nodump feature
- fixed bug #13
- configuration files can have DOS or UNIX text formatting
- now closing files before asking for the last slice; this allows unmounting the filesystem in that case

from version 1.2.0 to version 1.2.1
- minor change to have backward compatibility with old archives (dar < 1.2.0) generated on 64-bit OSes (have to use OS_BITS=32 in the Makefile on 64-bit OSes)
- adapted Axel Kohlmeyer's patch for RPMs
- adapted Dietrich Rothe's patch for compression level: -z has an optional argument, which is the compression level to use
- -I and -X now available while listing archive contents (-l)
- based on Brian May's patch, dar with EA_SUPPORT avoids complaining when reading a filesystem that does not support EA
- based on Brian May's other patch, dar now uses by default the integers
- dar is now built with dynamic linking, and a special version named dar_static, which is statically linked, is also available
- fixed problem on Windows NT & 2000 (fixed by the first change above)

from version 1.1.0 to version 1.2.0
- -P option can now accept wildcards
- changed dar output format when listing archive contents to something more similar to the output of tar. -T is provided to get the previous tree listing
- fixed bug #6
- user interaction is now possible even if standard input is used (for a pipe)
- fixed bug #7
- added some missing #include files for compilation under Windows using Cygwin
- added feature to display the name of user and group (when possible) in place of uid and gid while listing archive contents
- added the possibility to launch a command between slices (-E and -F options) for dar, dar_xform and dar_slave
- when saving or comparing a directory tree, dar goes transparently into subdirectories without modifying the last access date of each directory
- usage text (displayed by the -h option) is now generated from an XML file thanks to Chris Martin's little program named dar-help
- fixed bug concerning the uninstallation of man pages
- changed the place where man pages and documentation go: /usr/share/doc and /usr/share/man in place of /usr/doc and /usr/man for the RPM package (conforming to the Filesystem Hierarchy Standard)
- changed the place where documentation goes from /usr/local/doc to /usr/local/share/doc by default (thanks to Jerome Zago) (conforming to the Filesystem Hierarchy Standard)
- added scrambling feature (-J and -K options)
- added selective compression (-Y and -Z options)
- added a third state for saved data to keep track in an extracted catalogue of what is saved in the reference archive (this opens the door to the archive manager)
- added the ability to read a configuration file (-B option, -B like "batch")
- if a slice name is given in place of a base name, dar proposes to change to the correct base name (strips the extension number and dots)
- fixed bug #8
- added the dar_manager command-line program
- replaced integer types by macros that can be adapted to have correct behavior on 64-bit platforms (in particular to read archives from other platforms)

from version 1.0.0 to version 1.1.0
- added feature: ignored directories are no longer stored at all in the archive unless the -D option is used, in which case ignored directories are recorded as empty directories (as was the case in 1.0.x)
- added support for hard links. The generated archive format version is now 02, but format 01 can still be read and used as reference
- fixed bug #1
- fixed bug #2
- fixed bug #3
- added feature: restore only files more recent than existing ones (-r option)
- added feature: support for Extended Attributes (activated at compilation time)
- added feature: verbose option (-v) with -l (adds archive contents)
- modified behavior: -l option without -v is no longer interactive
- added feature: archive integrity test (option -t). CRCs have been added to the archive (format 02), thus even without compression dar is able to detect errors
- added feature: comparison with filesystem (difference) (option -d)
- modified behavior: non-interactive messages go to stdout, while those asking the user go to stderr (all go to stderr if stdout is used for producing the archive, or for sending orders to dar_slave)
- added feature: DAR automatically goes into non-interactive mode if no terminal is found on standard input (for example when run from crontab). In that case any question makes DAR abort
- added feature: catalogue extraction to a small file: "isolation" (-C option)
- added feature: archive produced on stdout when using -c or -C with "-" as filename
- added feature: -V option summarizes the version of the binary
- added feature: additional command "dar_xform" to "re-slice" an archive
- added feature: read an archive through a pair of pipes with the help of dar_slave
- added feature: long options are now available (see man page)
- fixed bug #5
- a lot of speed optimization in algorithms
- changed exit codes to positive values in case of error
- dar returns a new error code when an operation is partially successful (some files failed to be saved / restored / tested / compared)
- replaced the use of the vform() method by a customized simple implementation in the ui_printf() routine; this should now allow compilation with gcc-3
- changed long options that used an underscore character ('_') to use a dash ('-')
- added -O option for better behavior when used by a non-root user
- added 'make doc' target in the makefile

from version 1.0.2 to version 1.0.3
- bug #5 fixed

from version 1.0.1 to version 1.0.2
- bug #2 fixed
- bug #3 fixed

from version 1.0.0 to version 1.0.1
- correction of a few mistakes that led the compilation to fail with certain C++ compilers
- bug #1 fixed.
DAR's Documentation

Benchmarking backup tools

Introduction

The objective of this document is to compare common backup tools under Unix (Linux, FreeBSD, MacOS X...), among the most commonly available today.

  • The first target we want to address is being able to copy a directory tree and files with the best fidelity,
  • The second target is being able to backup and restore a whole system from a minimal environment without assistance of an already existing local server (disaster context).
  • The third target is being able to securely keep archived data for the long term. Securely here means having the ability to detect data corruption and limit its impact on the rest of the archive.

Depending on the targets we may need compression and/or ciphering inside the backup; depending on the context (public cloud storage, removable media, ...), we may also face limited storage space.

Backup software that requires servers already running on the local network (for example Bacula, Amanda, Bareos, UrBackup, Burp...) cannot address our second target, as in case of disaster we would first have to rebuild such a server (from what, then?) in order to be able to restore our system and its data. Such software is overly complex for the first target and not suitable for the third.

Partition cloning systems (clonezilla, MondoRescue, RescueZilla, partclone, dump and consorts) are targeted at block copy and as such cannot back up a live system: you have to shut down and boot from a CD/USB key, or run in single-user mode, in order to "backup". This cannot be automated and has a strong impact on the user, who has to interrupt her/his work during the whole backup operation.

Looking at the remaining backup tools, with or without Graphical User Interface, most of them rely on one of three backend programs, namely tar, rsync and dar:

  • Software based on dar: gdar, DarGUI, Baras, Darbup, Darbrrd, HUbackup, SaraB...
  • Software based on rsync: TimeShift, rsnapshot...
  • Software based on tar: BackupPC, Duplicity, fwbackups...

We will thus compare these three programs for the different test families described below.

Test Families

Several aspects are to be considered:

  • completeness of the restoration: file permissions, dates precision, hardlinks, file attributes, Extended Attributes, sparse files...
  • main features around backup: differential backup, snapshot, deduplication, compression, encryption, file's history...
  • robustness of the backup: how data corruption impact the backup, how it is reported...
  • execution performance: execution time, memory consumption, multi-threading support...
    Benchmark Results

    The results presented here are a synthesis of the test logs. This synthesis is in turn summarized one step further in the conclusion of this document.

    Completeness of backup and restoration

    Software plain file symlink hardlinked files hardlinked sockets hardlinked pipes user group perm. ACL Extended Attributes FS Attributes atime mtime ctime btime Sparse File Disk usage optimization
    Dar yes yes yes yes yes yes yes yes yes yes yes yes yes - yes(1) yes yes
    Rsync yes yes yes yes yes yes yes yes yes(4) yes(5) - - yes - yes(1) yes(6) yes(6)
    Tar yes yes yes - (2) - yes yes yes yes(7) yes(8) - - yes(3) - yes(1) yes(6) -
    • (1) "Yes" under MACoS X, FreeBSD and BSD systems. As of today (year 2020), Linux has no way to set the btime aka birthtime or yet creation time
    • (2) tar does not even save and restore plain normal sockets, but that's not a big issue in fact, as Unix sockets should be recreated by the applications that provide the corresponding service
    • (3) unless --xattrs is provided, mtime is saved by tar but with an accuracy of only 1 second, while today's systems provide nanosecond precision
    • (4) needs -A option
    • (5) needs -X option
    • (6) needs -S option
    • (7) needs --acl option
    • (8) needs --xattrs option

    See the test logs for all the details.
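As a side note on footnote (3), the difference between one-second and nanosecond timestamp precision is easy to observe directly. The following minimal Python sketch (our own illustration, not part of any tool under test) compares the two precisions for the same file:

```python
import os
import tempfile

# Create a file and read back its modification time at two precisions.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

st = os.stat(path)
seconds = int(st.st_mtime)       # 1-second accuracy (what tar stores by default)
nanoseconds = st.st_mtime_ns     # nanosecond accuracy kept by modern filesystems

# Truncating to whole seconds discards the sub-second part of the timestamp.
print(nanoseconds - seconds * 10**9 < 10**9)  # → True

os.unlink(path)
```

A backup tool that only stores `seconds` cannot restore the sub-second part, which is why the table marks tar's mtime support with footnote (3).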

    Feature set

    In addition to the exhaustiveness of the restored data (seen above), several features are a must-have when creating backups. Their description and what they bring to a backup process are given below, followed by a table of how they are supported by the different software under test:

    Historization
    Historization is the ability to restore a deleted file even long after the mistake has been made, by rotating backups over an arbitrarily large number of backup sets. Having associated tools to quickly locate the backup holding a particular file's version becomes important when the history grows. Historization can be done with only full backups, but of course it better leverages differential and incremental backups.
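To make the idea concrete, here is a minimal sketch of locating, across a set of rotated backups, the most recent backup that still contains a deleted file. The data structure and names are ours for illustration; a real tool would read this information from the backup catalogues:

```python
# Each rotated backup set is modelled as (backup_name, set_of_paths_it_contains).
backups = [
    ("full_2020-01-01", {"/etc/fstab", "/home/alice/report.txt"}),
    ("diff_2020-01-08", {"/home/alice/report.txt"}),
    ("diff_2020-01-15", {"/etc/fstab"}),  # report.txt was deleted before this run
]

def last_backup_with(path, backup_sets):
    """Return the name of the most recent backup set holding 'path', or None."""
    for name, contents in reversed(backup_sets):
        if path in contents:
            return name
    return None

print(last_backup_with("/home/alice/report.txt", backups))  # → diff_2020-01-08
```

The deeper the history, the more valuable such a lookup tool becomes, since scanning each backup set by hand quickly stops being practical.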

    Data filtering
    Not all files need to be saved:
    • some directories (like /tmp, /proc, /sys, /dev, /home/*/.cache) are useless to save
    • some files, based on their name or part of their name (their extension for example), like emacs's backup files *~ or your music files *.mp3 you already have archived somewhere, need not be saved either
    • You may wish to ignore files located on one or more particular mounted filesystems, or at the opposite, only consider certain volumes/disks/mounted filesystems and ignore all others, and have different backup rotation cycles for those
    • You may also find it better to tag files one by one (manually or by means of an automated process of your own) to be excluded from or included in the backup
    • Instead of tagging, you could also let a process build a long file listing to backup and/or to ignore
    • Last, you may well need a mix of several of these mechanisms at the same time
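The combination of directory pruning and filename masks can be sketched in a few lines of Python. The paths and patterns below are illustrative only; this mimics the general idea of the filtering mechanisms, not the exact mask evaluation of dar, rsync or tar:

```python
import fnmatch

PRUNED_DIRS = ("/tmp", "/proc", "/sys", "/dev")  # directories not worth saving
EXCLUDED_NAMES = ("*~", "*.mp3")                 # filename masks to skip

def should_save(path):
    """Decide whether a path belongs in the backup."""
    # Prune whole directory trees first...
    if any(path == d or path.startswith(d + "/") for d in PRUNED_DIRS):
        return False
    # ...then apply filename masks to the last path component.
    name = path.rsplit("/", 1)[-1]
    return not any(fnmatch.fnmatch(name, pat) for pat in EXCLUDED_NAMES)

print(should_save("/home/alice/report.txt"))   # → True
print(should_save("/home/alice/report.txt~"))  # → False
print(should_save("/tmp/scratch.dat"))         # → False
```

Mixing several mechanisms, as the last bullet suggests, then amounts to combining predicates like this one.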

    Slicing (or multi-volume)
    Having a backup split into several files of given max size can address several needs:
    • hold the backup on several removable media (CD, DVD, USB keys...) smaller than the backup itself
    • transfer the backup from a large space to another by means of a smaller removable media
    • transfer the backup over the network and resume at the last transmitted slice, rather than restarting the whole transfer in case of network issue
    • store the backup in the cloud when the provider limits the file size
    • be able to restore a backup on a system where storage space cannot hold both the backup and the restored system
    • transfer back from the cloud only a few slices to restore some files, when the cloud provider does not provide ad hoc protocols (sftp, ftp, ...) but only a user web-based interface
    Of course, multi-volume is really interesting if you don't have to concatenate all the slices to be able to have a usable backup.

    Last, the use cases identified above for backup slicing all revolve around limited storage space, thus having compression available when multi-volume is used is a key point here.
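    The slicing mechanism itself can be sketched as cutting a backup stream into fixed-size chunks. This is a simplified illustration, not dar's slice format (which adds headers and metadata to each slice):

```python
import io

def slice_stream(stream, slice_size):
    """Split a byte stream into chunks of at most slice_size bytes,
    mimicking a multi-volume backup (simplified sketch)."""
    while True:
        chunk = stream.read(slice_size)
        if not chunk:
            break
        yield chunk

backup = io.BytesIO(b"x" * 2500)           # pretend this is the backup stream
slices = list(slice_stream(backup, 1024))  # 1024-byte "slices"
sizes = [len(s) for s in slices]           # two full slices plus a remainder
```

    Note that each slice is directly usable as it is produced, which is what allows per-slice actions such as burning or uploading without waiting for the whole backup.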

    Symmetric strong encryption
    Symmetric strong encryption is the ability to cipher a backup with a password or passphrase and use that same key to decipher it. Some well-known algorithms in this area are AES, Blowfish, Camellia...
    Symmetric strong encryption is interesting for the following cases:
    • if your disk is ciphered, would you store your backup in clear on the cloud?
    • you do not trust your cloud provider not to inspect your data and build a marketing profile of you from it.
    • You want to prevent your patented data or industrial secret recipes from falling into the hands of competitors or of government agencies that could clone them without fear of being prosecuted. This use case applies whether your backup is stored on local disk, removable media or public cloud.
    • Simply because in your country, you have the right and the freedom to have privacy.
    • Because your country, democratic today, could turn into a dictatorship tomorrow, and based on some arbitrary criteria (belief, political opinion, sexual orientation...) you could then suffer from this information having been accessible to the authorities, or even publicly released, while you still need backups on arbitrary storage media.

    Asymmetric strong encryption
    Asymmetric strong encryption is the ability to cipher a backup with a public key and use the corresponding private key to decipher it (PGP, GnuPG...).
    Asymmetric encryption is mainly interesting when exchanging data over the Internet between different persons, or possibly for archiving data in the public cloud. Using it for backups seems inappropriate and is more complex than symmetric strong encryption, as restoration requires the private key, which must thus be stored outside the backup itself while still being protected from unauthorized access. The use of the private key can still be protected with a password or passphrase, but this gives the same feature level as symmetric encryption with a more complex process and not much more security.

    Protection against plain-text attack
    Ciphering data must be done with a minimum level of security, in particular when the ciphered data has a well-defined structure and patterns, as a backup file format is expected to have. Knowing the expected structure of the clear data may let an attacker uncover the whole ciphered data. This is known as a plain-text attack.

    Key derivation function
    • Using the same password/passphrase for different backups is convenient but not secure. A Key Derivation Function (KDF) using a salt (PKCS5/PBKDF2, Argon2...) lets you reuse the same password/passphrase while the data gets encrypted with a different key each time.
    • Another need for a KDF is that human-provided passwords/passphrases are usually weak: even when we use letters, digits and some special characters, passwords and passphrases are still located in a small area of the possible key space, which a dictionary attack can leverage. As a KDF is also CPU-intensive by design, it costs an attacker a lot of effort and time to derive each word of a dictionary into its KDF-transformed equivalent. The time required to perform a dictionary attack can thus be multiplied by several hundred thousand, leading to an effective time of tens of years or even centuries rather than hours or days.
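    The salt mechanism can be demonstrated with Python's standard PBKDF2 implementation. The passphrase and iteration count below are arbitrary examples:

```python
import hashlib
import os

password = b"correct horse battery staple"  # example passphrase

# Two backups, two random salts: same passphrase, different derived keys
salt1, salt2 = os.urandom(16), os.urandom(16)
key1 = hashlib.pbkdf2_hmac("sha256", password, salt1, 200_000)
key2 = hashlib.pbkdf2_hmac("sha256", password, salt2, 200_000)

# The derivation is deterministic for a given (password, salt, iterations),
# so the key can be recomputed at restoration time from the stored salt
key1_again = hashlib.pbkdf2_hmac("sha256", password, salt1, 200_000)
```

    The salt is not secret and is typically stored in the backup header; only the passphrase must remain private.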

    File change detection
    When backing up a live system, it is important to detect, retry saving, or flag files that changed while they were being read for backup. In such a situation, the backed-up file could be recorded in a state it never had: as the backup process reads sequentially from beginning to end, if a modification A is made at the end of the file and then a modification B at its beginning during this file's backup, the backup may contain B but not A, while at no time did the file contain B without A. Given how quickly a file can be read, timestamp accuracy of microseconds or nanoseconds is mandatory to detect such a change during the backup, else you will end up with corrupted data in the backup and nothing to rely on in the event of a file deleted by mistake, a disk crash or a disaster.
    At restoration time, if the file was saved anyway, it is good to know that it was not saved properly: restoring an older but sane version may be better, something the user/sysadmin cannot decide if the backup does not hold this type of information.
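    The detection principle can be sketched by comparing the file's high-resolution modification timestamp before and after reading it. This is a simplified illustration; a real backup tool would also decide whether to retry or flag the file:

```python
import os
import tempfile

def read_with_change_check(path):
    """Read a file and report whether it appears unchanged while read,
    comparing nanosecond-resolution mtimes before and after the read."""
    before = os.stat(path).st_mtime_ns
    with open(path, "rb") as f:
        data = f.read()
    after = os.stat(path).st_mtime_ns
    return data, before == after

# Demonstrate on a temporary file that nothing modifies concurrently
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"stable content")
    path = tmp.name
data, unchanged = read_with_change_check(path)
os.unlink(path)
```

    With second-only timestamp resolution, a change made within the same second as the read would go undetected, which is why the text insists on micro- or nanosecond accuracy.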

    Multi-level backup
    Multi-level backup is the ability to make use of full backups, differential backups and/or incremental backups.
    The advantage of differential and incremental backups compared to full ones is the much shorter time they require to complete and the reduced storage space and/or bandwidth they imply when transferred over the network.

    Binary delta
    Without binary delta, when performing a differential or incremental backup, a file that has changed since the previous backup is resaved entirely. Some huge files produced by well-known applications (mailboxes for example) would consume a lot of storage space and lead to a long backup time even with incremental or differential backups. Binary delta is the ability to store only the part of a file that changed since a reference state, which leads to important space savings and a reduced backup duration.
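    The idea can be sketched with a very naive block-based delta: store only the fixed-size blocks of the new file that differ from the reference. Real implementations (such as rsync's rolling-checksum algorithm, which dar's binary delta also builds on) handle insertions and shifted data, which this sketch does not:

```python
BLOCK = 4  # tiny block size for the sketch; real tools use much larger blocks

def make_delta(old: bytes, new: bytes):
    """Record only the fixed-size blocks of `new` that differ from `old`."""
    delta = {}
    for off in range(0, len(new), BLOCK):
        block = new[off:off + BLOCK]
        if old[off:off + BLOCK] != block:
            delta[off] = block
    return delta

def apply_delta(old: bytes, delta, new_len: int) -> bytes:
    """Rebuild the new file from the reference and the stored blocks."""
    out = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for off, block in delta.items():
        out[off:off + len(block)] = block
    return bytes(out)

old = b"aaaabbbbccccdddd"
new = b"aaaaXXXXccccdddd"      # only the second block changed
delta = make_delta(old, new)   # stores a single 4-byte block, not 16 bytes
restored = apply_delta(old, delta, len(new))
```

    Here the delta holds 4 bytes instead of the 16-byte file, which is the space saving the text describes, scaled down.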

    Detecting suspicious modifications
    When performing a backup based on a previous one (differential, incremental or decremental backups), it is possible to check the way the metadata of saved files has changed since then and warn the user when some uncommon patterns are met. Those may be the trace of a rootkit, virus, ransomware or trojan trying to hide its presence and activities.

    Snapshot
    A snapshot is like a differential backup made right after the full backup (no file has changed): it is a minimal set of information that can be used to:
    • create an incremental or differential backup without having the full backup (or more generally the backup of reference) around: when backups are stored remotely, a snapshot is a must.
    • compare the current living filesystem with a status it had at the time the snapshot was made
    • bring some metadata redundancy and a means of repair to face a corrupted backup

    On-fly hashing
    On-fly hashing is the ability to compute a hash of the backup at the same time it is generated, before it is written to storage. Such a hash can be used to:
    • validate that a backup has been properly transferred to a public cloud storage, with the hash computation done in parallel
    • check that no data corruption has occurred (doubt about disk or memory) even when the backup is written to a local disk
    Hash validation is usually faster than backup testing or backup comparison, though it does not validate your ability to rely on the backup as deeply as those latter operations. The hash can also be computed after the backup has completed, but this requires re-reading the whole backup and waiting for the necessary storage I/O. On-fly hashing leverages the fact that the data is already in memory, saving the corresponding disk I/O and latency, thus it is much faster. As it is done in memory, it can also help detect corruption on the backup destination media (like USB keys or poor-quality hardware).
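    A minimal sketch of the mechanism: the hash is updated from the in-memory chunk just before that chunk is written, so no second read pass over the storage is needed:

```python
import hashlib
import io

def write_with_hash(chunks, dest):
    """Write backup chunks to `dest` while hashing them on the fly."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)   # hash computed from data still in memory
        dest.write(chunk)
    return h.hexdigest()

dest = io.BytesIO()  # stands in for a slice file or remote storage
digest = write_with_hash([b"slice-one", b"slice-two"], dest)

# Verifying after the fact requires re-reading the whole backup instead
recheck = hashlib.sha256(dest.getvalue()).hexdigest()
```

    Comparing `digest` against a hash recomputed from the destination later is what reveals corruption introduced by the media.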

    Run custom command during operation
    For an automated backup process, it is often necessary to run commands before and after the backup operation itself, but also during the backup process. For example, when entering a directory, one could need to run an arbitrary command generating a file that will be included in the backup, or, when exiting that directory, perform some cleanup operation in it. Another use case arises when slicing the backup: the ability to perform a custom operation after each slice is generated, like uploading the slice to the cloud, burning it to DVD-/+RW, loading a tape from a tape library...
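    A per-slice hook can be sketched as running a user-supplied command with the completed slice's path as argument. The hook below merely echoes the path back (a stand-in for an upload or tape-load script of your own):

```python
import os
import subprocess
import sys
import tempfile

def run_slice_hook(command, slice_path):
    """Run a user-supplied command once a slice is completed,
    passing the slice path as its last argument."""
    return subprocess.run(command + [slice_path],
                          capture_output=True, text=True, check=True)

# Create a dummy slice file to hand to the hook
with tempfile.NamedTemporaryFile(suffix=".1.dar", delete=False) as tmp:
    slice_path = tmp.name

# Hypothetical hook: just print the received slice path
hook = [sys.executable, "-c", "import sys; print(sys.argv[1])"]
result = run_slice_hook(hook, slice_path)
os.unlink(slice_path)
```

    The `.1.dar` suffix is only illustrative; the point is that the backup tool, not the user, drives when the hook fires.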

    Dry-run execution
    When tuning a backup process, it is often necessary to quickly verify that everything will work flawlessly, without having to wait for a backup to complete and consume storage resources and network bandwidth.

    User message within backup
    Allowing the user to add an arbitrary message within the backup may be useful when the filename is too short to hold the needed information (like the context the backup or archive was made in, a hint for the passphrase... and so on).

    Backup sanity test
    It is crucial in a backup process to validate that the generated backup is usable. There are many reasons it might not be, from data corruption in memory, on disk or over the network, to disk space saturation leading to a truncated backup, down to a software bug.

    Comparing with original data
    One step further in backup and archive validation is comparing file content and metadata with what the live system holds.

    Tunable verbosity
    When a backup process is in production and works nicely, it is usually interesting to have the most minimal output possible, so that any error can still be logged. When setting up a backup process, on the other hand, more detailed information is required to understand and validate that the backup process follows the expected path.

    Modify the backup's content
    Once a backup has been completed, you might notice that you have saved extra files you ought not to save. Being able to drop them from the backup to save some space without having to restart the whole backup may lead to a huge time saving.

    You might also need to add some extra files that were outside the backup scope, having the possibility to add them without restarting the whole backup process may also lead to a huge time saving.

    Stdin/stdout backup read/write
    Having the ability to pipe the generated backup to an arbitrary command is one of the ultimate keys of backup software flexibility.

    Remote network storage
    This is the ability to produce a backup directly to network storage without using a local disk, and to restore directly by reading a backup from such remote storage, still without using local storage. Network/remote storage is to be understood as remote network storage like public cloud, private cloud, a personal NAS... accessible over the network by means of a file transfer protocol (scp, sftp, ftp, rcp, http, https...)

    Feature Dar Rsync Tar
    Historization Yes - Yes
    Data filtering by directory Yes Yes Yes
    Data filtering by filename Yes Yes limited
    Data filtering by filesystem Yes limited limited
    Data filtering by tag limited - -
    Data filtering by files listing Yes Yes limited
    Slicing/multi-volume Yes - limited
    Symmetric encryption Yes - Yes
    Asymmetric encryption Yes - Yes
    Plain-text attack protection Yes - -
    PBKDF2 Key Derivation Function Yes - -
    ARGON2 Key Derivation Function Yes - -
    File change detection Yes - limited
    Multi-level backup Yes - Yes
    Binary delta Yes Yes -
    Detecting suspicious modifications Yes - -
    Snapshot for diff/incr. backup Yes - Yes
    Snapshot for comparing Yes - -
    Snapshot for redundancy Yes - -
    On-fly hashing Yes - -
    Run custom command during operation Yes - limited
    Dry-run execution Yes Yes -
    User message within backup Yes - -
    Backup sanity test Yes - Yes
    Comparing with original data Yes - Yes
    Tunable verbosity Yes Yes limited
    Modify the backup's content Yes Yes limited
    Stdin/stdout backup read/write Yes - Yes
    Remote network storage Yes limited Yes

    The results presented above are a synthesis of the test logs.

    Robustness

    The objective here is to see how a minor data corruption impacts the backup. Such a corruption (a single-bit inversion) can be caused by a network transfer, a cosmic particle hitting a memory bank, or simply by time passing while the backup sits on a particular medium. In real life, data corruption may well impact more than one bit. But while the ability to work around a single-bit corruption says nothing about the ability to recover from larger data corruption, the inability to recover from a single corrupted bit is enough to know that the same software will behave even worse when larger data corruption is met.
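    Detecting such a corruption requires a checksum of the original data; the single-bit inversion itself is trivial to simulate, as this sketch shows:

```python
import hashlib

backup = bytearray(b"some backup payload")
original_digest = hashlib.sha256(backup).hexdigest()

# Flip a single bit, as a cosmic particle or a failing medium might
backup[5] ^= 0x01
corrupted_digest = hashlib.sha256(backup).hexdigest()

# Any checksum-carrying format can now tell the copies apart;
# a format without checksums has no way to notice the change
detected = original_digest != corrupted_digest
```

    Detection is only half of the robustness story measured below; being able to skip the damaged area and restore the unaffected files is the other half.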

    Behavior Dar Rsync Tar alone Tar + gzip
    Detects backup corruption Yes - - Yes
    Warn or avoid restoring corrupted data Yes - - Yes
    Able to restore all files not concerned by the corruption Yes Yes Yes -

    To protect your data, you can go one step further computing data redundancy with Parchive on top of your backup or archives. This will allow you to repair them in case of corruption.

    • First, rsync is not suited to that process, as creating global redundancy for a directory tree is much more complex and error-prone. At the opposite, tar and dar are pretty well suited, as a backup is a single file, or a few big files when using slicing or multi-volume backups.
    • Second, whatever redundancy level you select, if the data corruption exceeds it you will not be able to repair your backups and archives. It is thus better to also rely on a robust and redundant backup file structure, and here dar has some big advantages.
    • Last, if execution time matters to you, having a sliced backup with a slice size smaller than the available RAM and running Parchive right after each slice is created will save a lot of disk I/O and can speed up the overall process by more than 40%. But here too, only dar provides this possibility.

    The results presented above are a synthesis of the test logs.

    Performance

    In the following, we have distinguished two purposes of backup tools: the "identical" copy of a set of files and directories (short term operation) and the usual backup operation (long term storage and historization).

    Performance of file copy operation

    The performance aspect to consider for this target is exclusively the execution speed; data reduction on the wire only matters if the bandwidth is low enough that the added compression time does not ruin the gain in transfer time. Compression time does not depend on the backup tool but on the data, and we will see in the backup performance tests how the different backup tools reduce data on the wire. For the execution time we get the following results:

    Single huge file

    The copied data was a Linux distro installation ISO file

    cp: 2.58 s
    Dar: 9.18 s
    Rsync: 15.28 s
    Tar: 6.51 s
    Linux system

    The copied data was a fresh fully featured Linux installed system

    cp: 5.15 s
    Dar: 16.78 s
    Rsync: 16.59 s
    Tar: 8.04 s
    Conclusion

    For local copy, cp is the fastest but totally unusable for remote copy. At first sight one could think tar would be the best alternative for remote copy, but that would not take into account the fact that you will probably want to use a secured connection (unless all segments of the underlying network are physically yours, end to end). Thus, once the backup is generated, using tar will require an extra user operation, extra computing time to cipher/decipher, and time to transfer the data, while both alternatives, rsync and dar, have this integrated: they can copy and transfer at the same time, with both a gain of time and no added operations for the user.

    In consequence, for remote copy, if this is a unique/single remote copy, dar will be faster than rsync most of the time (even when using compression to cope with low bandwidth, see the backup test results below). But for recurring remote copies, even if rsync is not faster than dar, it has the advantage of being designed especially for this task, as in that context we do not need to store the data compressed nor ciphered. This can be summarized as follows:

    Operation Best Choice Alternative
    Local copy cp tar
    One-time remote copy dar rsync
    Recurrent remote copy rsync dar

    See the corresponding test logs for more details

    Performance of backup operation

    For backup we consider the following criteria by order of importance:

    1. data reduction on backup storage
    2. data reduction when transmitted over the network
    3. execution time to restore a few files
    4. execution time to restore a full and differential backups
    5. execution time to create a full and differential backups

    Why this order?

    • Because backup creation is usually done at low priority, in the background, and on a day-to-day basis, the execution time is less important than reducing storage usage: reduced storage usage gives a longer backup history and increases the ability to recover accidentally removed files long after the mistake was made (which may be detected weeks or months afterward).
    • Next, while your backup storage can be anything, including low-cost or high-end dedicated storage, backups are more and more frequently externalized, mainly to public cloud storage, leading to relatively cheap disaster recovery solutions. However, your WAN/Internet access will be drained by the backup volumes flying away, and you probably don't want them to consume too much of this bandwidth, which could slow down your business or Internet access. As a workaround, one could rate-limit the bandwidth for backup exchanges only. But doing so extends the backup transfer time so much that you may have to reduce the backup frequency to avoid having two backups transferred at the same time. This would make you lose accuracy of saved data: too low a backup frequency will only allow you to restore your systems to the state they had several days, instead of several hours or tens of minutes, before the disaster occurred. For that reason, data reduction on the wire is the second criterion. Note that data reduction on storage usually implies data reduction on the wire, but the opposite is not always true, depending on the backup tool used.
    • Next, it is much more frequent to have to restore a few files (corrupted or deleted by mistake), and we need this to be quick because it is an interactive operation and the missing data may be mandatory for someone's work to go forward, a workflow that may impact several other people.
    • The least frequent operation (hopefully) is the restoration of a whole system in case of disaster. Having it perform quickly is of course important, but less so than having a complete, robust, accurate and recent backup somewhere that you can count on to restore your systems to the most recent possible state.

    Note that the following results do not take into account the performance penalty implied by network latency, for several reasons:

    • it would not measure the software's performance but the network bandwidth and latency, which is not the object of this benchmark and may vary with distance, link-layer technology and the number of devices crossed,
    • we can assume the network penalty to be proportional to the data processed by each software, as all protocols used are usually TCP-based (ftp, sftp, scp, ssh, ...), whose performance is related to operating system parameters (window size, MTU, etc.), not to the backup software itself. As we only rely on tmpfs filesystems for this benchmark, to avoid measuring disk I/O performance, we may approximate that increased network latency or reduced network bandwidth would just inflate the relative execution times of the different tested software in a linear manner. In other words, adding a network between the system and the backup storage should not modify the relative performance of the software under test.

    For all the backup performance tests that follow (but not for the file copy performance tests seen above), compression was activated using the same and most commonly supported algorithm: gzip at level 6. Other algorithms may complete faster or provide better compression ratios, but this is linked to the chosen compression algorithm and the data to compress, not to the backup tools tested here.

    Data reduction on backup storage

    Full backup
    Dar: 1580562224 bytes
    Dar+sparse: 1578428790 bytes
    Dar+sparse+binary delta: 1602481058 bytes
    Rsync: 4136318307 bytes
    Rsync+sparse: 4136318307 bytes
    tar: 1549799048 bytes
    tar+sparse: 1549577862 bytes
    Differential backup
    Dar: 49498524 bytes
    Dar+sparse: 49505251 bytes
    Dar+sparse+binary delta: 23883368 bytes
    Rsync: not supported
    Rsync+sparse: not supported
    tar: 44607904 bytes
    tar+sparse: 44604194 bytes
    Full + Differential backup

    This is an extrapolation of the required backup volume after one week of daily backups of the Linux system under test, assuming the activity is as minimal each day as it was here between the initial day of the full backup and the day of the first differential backup (a few package upgrades and no user activity).

    Dar: 1927051892 bytes
    Dar+sparse: 1924965547 bytes
    Dar+sparse+binary delta: 1769664634 bytes
    Rsync: not supported
    Rsync+sparse: not supported
    tar: 1862054376 bytes
    tar+sparse: 1861807220 bytes
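    The extrapolation is simply the full backup size plus seven daily differential backups; a quick check against dar's figures from the tables above:

```python
# Figures from the measured full and differential backups, in bytes
full = {"dar": 1580562224, "dar+sparse": 1578428790,
        "dar+sparse+delta": 1602481058}
diff = {"dar": 49498524, "dar+sparse": 49505251,
        "dar+sparse+delta": 23883368}

# One week of daily backups: one full backup plus seven differentials
week = {name: full[name] + 7 * diff[name] for name in full}
```

    The same formula reproduces the tar figures; rsync has no differential backup to extrapolate from.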

    The previous results concern the backup of a steady Linux system; the relative data reduction might favor both rsync and dar+binary delta when the proportion of large files being slightly modified increases (like mailbox files).

    Data reduction over network

    Full backup
    Dar: 1580562224 bytes
    Dar+sparse: 1578428790 bytes
    Dar+sparse+binary delta: 1602481058 bytes
    Rsync: 1587714486 bytes
    Rsync+sparse: 1587714474 bytes
    tar: 1549799048 bytes
    tar+sparse: 1549577862 bytes
    Differential backup
    Dar: 49498524 bytes
    Dar+sparse: 49505251 bytes
    Dar+sparse+binary delta: 23883368 bytes
    Rsync: 29293958 bytes
    Rsync+sparse: 29293958 bytes
    tar: 44607904 bytes
    tar+sparse: 44604194 bytes
    Full + Differential backup

    This is the same extrapolation as done above (one week of daily backups), but for the volume of data transmitted over the network instead of the backup volume on storage.

    Dar: 1927051892 bytes
    Dar+sparse: 1924965547 bytes
    Dar+sparse+binary delta: 1769664634 bytes
    Rsync: 1792772192 bytes
    Rsync+sparse: 1792772180 bytes
    tar: 1862054376 bytes
    tar+sparse: 1861807220 bytes

    Execution time to restore a few files

    Dar: 0.98 s
    Dar+sparse: 1.13 s
    Dar+sparse+binary delta: 1.27 s
    Rsync: 3 ms
    Rsync+sparse: 3 ms
    tar: 25.15 s
    tar+sparse: 25 s

    Here the phenomenon is even more pronounced when the file to restore is located near the end of the tar backup, as tar sequentially reads the whole backup up to the requested file.

    Execution time to restore a whole system - full backup

    Dar: 22.94 s
    Dar+sparse: 30.36 s
    Dar+sparse+binary delta: 30.35 s
    Rsync: 157.81 s
    Rsync+sparse: 158.39 s
    tar: 26.72 s
    tar+sparse: 26.27 s

    Execution time to restore a single differential backup

    Dar: 3.48 s
    Dar+sparse: 3.48 s
    Dar+sparse+binary delta: 3.44 s
    Rsync: not supported
    Rsync+sparse: not supported
    tar: 1.48 s
    tar+sparse: 1.5 s

    Execution time to restore a whole system - full + differential backup

    We use here the same extrapolation of a week of daily backups as done above: the first backup being a full backup, with differential/incremental backups done on the following days.

    Clarifying the terms used: a differential backup saves only what has changed since the full backup was made. The consequence is that each day the backup is slightly bigger to process, depending on the way data changed: if all files change every day (like mailboxes, user files, ...), each new differential backup will have the same size and take the same processing time to complete. At the opposite, if new data is added each day, each differential backup's size will be the sum of the incremental backups that could have been done instead since the full backup was made.

    Unlike the differential backup, the incremental backup saves only what has changed since the last backup (full or incremental). For constant activity, like the steady Linux system we used here, the incremental backup size should stay the same over time (and be equivalent to the size of the first differential backup), thus the extrapolation is easy and not questionable: the restoration time is the time to restore the full backup plus the time to restore the first differential backup multiplied by the number of days that passed.

    Execution time to restore a whole system - lower bound

    The lower bound is the sum of the execution times for restoring the full backup and one differential backup, seen just above. It corresponds to the minimum execution time for restoring a whole system from a full+differential backup.

    Dar: 26.42 s
    Dar+sparse: 33.84 s
    Dar+sparse+binary delta: 33.79 s
    Rsync: full backup only 157.81 s
    Rsync+sparse: full backup only 158.39 s
    tar: 28.2 s
    tar+sparse: 27.77 s

    Execution time to restore a whole system - higher bound

    The higher bound is the sum of the execution time of the full restoration plus seven times the execution time of the differential restoration. It corresponds to the worst-case scenario where new data is added each day (still using a steady Linux system with constant activity). It also corresponds to the scenario of restoring a whole system from full+incremental backups (7 incremental backups have to be restored in that one-week scenario):

    Dar: 47.3 s
    Dar+sparse: 54.72 s
    Dar+sparse+binary delta: 54.43 s
    Rsync: full backup only 157.81 s
    Rsync+sparse: full backup only 158.39 s
    tar: 37.08 s
    tar+sparse: 36.77 s
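    Both bounds follow directly from the timings measured above; checking dar's figures:

```python
# dar timings from the tables above, in seconds
full_restore = 22.94   # whole-system restoration from the full backup
diff_restore = 3.48    # restoration of a single differential backup

lower_bound = full_restore + diff_restore        # full + one differential
upper_bound = full_restore + 7 * diff_restore    # full + seven incrementals
```

    The same arithmetic applied to the tar timings reproduces its bounds; rsync, lacking differential backups, only has the full-restore figure.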

    Execution time to create a backup

    Dar: 149.73 s
    Dar+sparse: 157.99 s
    Dar+sparse+binary delta: 162.62 s
    Rsync: 156.98 s
    Rsync+sparse: 183.44 s
    tar: 148.59 s
    tar+sparse: 149.38 s

    Ciphering/deciphering performance

    There are several reasons that imply the need to cipher data:

    • if your disk is ciphered, would you store your backup in clear on the cloud?
    • do you trust your cloud provider to not inspect your data for marketing profiling?
    • Are you sure your patented data and secret industrial recipes will not be used by the competition?
    • and so on

    The ciphering execution time is independent of the nature of the backup, full or differential, compressed or not. To evaluate the ciphering performance we use the same data sets as previously, both compressed and uncompressed. However, not all software under test is able to cipher the resulting backup: rsync cannot.

    Full backup+restoration execution time
    Dar: 9.13 s
    Rsync: N/A
    Tar (openssl): 7.39 s
    Execution time for the restoration of a single file
    Dar: 0.42 s
    Rsync: N/A
    Tar (openssl): 1.79 s
    Storage requirement ciphered without compression
    Dar: 1.46 GiB
    Rsync: N/A
    Tar (openssl): 1.49 GiB

    See the corresponding test logs for more details.

    Conclusion

    So far we have measured different performance aspects, evaluated available features, tested backup robustness and observed how exhaustively each backup software under test saves data. This gives a lot of information, already summarized above. But it would still not be of great use to anyone reading this document (especially those jumping straight to the conclusion ;^) ), so we have to get back to the use cases and their respective requirements to obtain the essential oil drop anyone can use immediately:

    Criteria for the different use cases

    Use Cases Key Point Optional interesting features
    Local directory copy
    • execution speed
    • completeness of copied data and metadata
    remote directory copy - wide network
    • execution speed
    • completeness of copied data and metadata
    • on wire ciphering
    remote directory copy - narrow network
    • execution speed
    • data reduction on wire
    • completeness of copied data and metadata
    • on wire ciphering
    Full backups only
    • completeness of backed-up data and metadata
    • data reduction on storage
    • fast restoration of a few files
    • fast restoration of a whole backup
    full+diff/incr. backup
    • completeness of backed-up data and metadata
    • data reduction on storage
    • fast restoration of a few files
    • fast restoration of a whole backup
    • managing tool of backups rotation
    Archiving of private data
    • data reduction on storage
    • robustness of the archive
    • ciphering
    • redundancy data
    Archiving of public data
    • data reduction on storage
    • robustness of the archive
    • signing
    • fast decompression algorithm
    Private data exchange over Internet
    • data reduction over the network
    • asymmetric encryption and signing
    • redundancy data
    • multi-volume backup/archive
    • integrated network protocols in backup tool
    Public data exchange over Internet
    • data reduction over the network
    • hashing
    • signing
    • integrated network protocols in backup tool

    Complementary criteria depending on the storage type

    And depending on the target storage, the following adds on top:

    Use Cases Key Point Optional interesting features
    Local disk
    • execution speed
    • hashing
    Data stored on private NAS
    • data reduction on storage
    • multi-volume backup
    • integrated network protocols in backup tool
    • ciphering
    Data stored on public cloud
    • data reduction on storage and on wire
    • ciphering
    • multi-volume backup
    • integrated network protocols in backup tool
    Data stored on removable media (incl. tapes)
    • multi-volume backup
    • data reduction on storage
    • on-fly hashing
    • ciphering
    • redundancy data

    Essential oil drop

    In summary, putting in front of these requirements the different measures we did:

    • exhaustiveness of backed-up data
    • available features around backup
    • backup robustness in the face of media corruption
    • overall performance

    We can summarize the best software to put in front of each particular use case:

    Use Cases Local disk storage Private NAS Public Cloud Removable media
    Local directory copy
    cp
    dar (not the fastest)
    rsync (not the fastest)
    tar (not the fastest)
    - - -
    One time remote directory copy -
    dar
    rsync (not the fastest)
    tar (no network protocol embedded)
    dar
    rsync (not the fastest)
    tar (no network protocol embedded)
    dar
    rsync (not the fastest)
    tar (no network protocol embedded)
    Recurrent remote directory copy -
    dar (fastest, but automation is a bit less straightforward than using rsync)
    rsync
    tar (no network protocol embedded)
    dar (fastest, but automation is a bit less straightforward than using rsync)
    rsync
    tar (no network protocol embedded)
    dar (fastest, but automation is a bit less straightforward than using rsync)
    rsync
    tar (no network protocol embedded)
    Full backups only
    (private data)
    dar (has the advantage of providing long historization of backups)
    rsync (no data reduction on storage, slow to restore a whole filesystem)
    tar (does not save all file attributes and inode types, slow to restore a few files)
    dar
    rsyncno data reduction on storage
    tarnot saving all file attributes and inode types, slow to restore a few files, no network protocol embedded
    dar
    rsyncno data ciphering and no reduction on storage
    tarnot embedded ciphering, not the strongest data encryption, not saving all file attributes and inode types, slow to restore a few files, no network protocol embedded
    dar
    rsyncno multi-volume support, no data ciphering and no reduction on storage
    tarcompression and multi-volume are not supported at the same time, not saving all file attributes and inode types, not embedded ciphering, not the strongest data encryption
    full+diff/incr. backups
    (priate data)
    dar
    rsyncdifferential backup not supported, full backup is overwritten
    tarnot saving all file attributes and inode types, slow to restore a few files
    dar
    rsyncdifferential backup not supported, full backup is overwritten
    tarnot saving all file attributes and inode types, slow to restore a few files, no network protocol embedded
    dar
    rsyncdifferential backup not supported, full backup is overwritten
    tarnot embedded ciphering, not the strongest data encryption, not saving all file attributes and inode types, slow to restore a few files, no network protocol embedded
    dar
    rsyncdifferential backup not supported, full backup is overwritten, no support for multi-volime, no data reduction, no ciphering
    tarcompression and multi-volume are not supported at the same time, not saving all file attributes and inode types, not embedded ciphering, not the strongest data encryption
    Archiving of private data
    dar
    rsyncno data reduction on storage, no detection of data corruption, complex parity data addition
    tarno detection of data corruption or loss of all data after the first corruption met
    dar
    rsyncno data reduction, no detection of data corruption, complex parity data addition
    tarno detection of data corruption or loss of all data after the first corruption met
    dar
    rsyncno ciphering, no data reduction, no detection of data corruption, complex parity data addition
    tarno detection of data corruption or loss of all data after the first corruption met, no embedded ciphering, no protection against plain-text attack
    dar
    rsyncno data reduction, no multi-volume, no ciphering, no detection of data corruption, complex parity data addition
    tarcompression and multi-volume are not supported at the same time, no detection of data corruption or loss of all data after the first corruption met, no ciphering
    Archiving of public data
    darmost robust format but not as standard as tar's
    rsyncno reduction on storage
    tar
    darmost robust archive format but not as standard as tar's
    rsyncno reduction on storage, complicated to download a directory tree and files from other protocols than rsync
    tar
    darmost robust archive format but not as standard as tar
    rsyncno reduction on storage, complicated to download a directory tree and files from other protocols than rsync
    tar
    dar
    rsyncno reduction on storage, no multi-volume, no detection of data corruption, complex parity data addition
    tarcompression and multi-volume are not supported at the same time
    Private data exchange over Internet
    dar
    rsyncnot the best data reduction over the network
    tarbest data reduction on network but no embedded ciphering, no integrated network protocols
    dar
    rsyncno data reduction on storage, not the best data reduction over the network
    tarbest data reduction on network, but lack of embedded ciphering, lack of integrated network protocols
    dar
    rsyncno ciphering and no data reduction on storage
    tarno embedded ciphering, no integrated network protocols, no protection against plain-text attack, only old KDF functions supported, complex and error prone use of openssl to cipher the archive
    -
    Public data exchange over Internet
    darnot the best data reduction over the network
    rsyncnot the best data reduction over the network
    tar
    darnot the best data reduction over the network
    rsyncno data reduction on storage, not the best data reduction over the network
    tar
    darnot the best data reduction over the network
    rsyncno data reduction on storage, not the best data reduction over the network
    tar
    -

    In each cell of the previous table, the different programs are listed in alphabetical order; they are colorized according to the following code:

    Color codes
    best solution
    good solution
    not optimal
    not adapted

    Hovering the mouse over a particular item gives more details about the reason it has not been selected as the best solution for a particular need.

    Dar Documentation

    Dar/Libdar Internals - Notes

    Introduction

    Here takes place a collection of notes. These have been created after the implementation of a given feature, mainly for further reference but also as user information. The idea behind these notes is to record some implementation choices, the arguments and reasons that led to them, but also to let the user be informed about the choices made and be able to offer remarks without having to look deeply into the code to learn dar's internals.

    EA & differential backup

    Brief presentation of EA:

    EA stands for Extended Attributes. In a Unix filesystem a regular file is composed of a set of bytes (the data) and an inode (containing the metadata). The inode adds properties to the file, such as owner, group, permissions and dates (last modification date of the data [mtime], last access date to the data [atime], and last inode change date [ctime]). Last, the name of the file is not contained in the inode, but in the directory(ies) it is linked to. When a file is linked more than once in the directory tree, we speak about "hard links". This way the same data and associated inode appear several times in the same or different directories. This is not the same as a symbolic link, the latter being a file that contains the path to another file (which may or may not exist). A symbolic link has its own inode. OK, now let's talk about EA:

    Extended attributes are a recent feature of Unix filesystems (at the time of this writing, year 2002). They are not part of the inode, nor part of the data, nor part of a given directory. They are stored beside the inode and are a set of key/value pairs. The owner of the file can define any key and associate an arbitrary data value with it. The user also has the means to list the EA and remove a particular key from it. What are they used for? Simply a more flexible way to associate information with a file.

    One particularly interesting use of EA is ACL: Access Control Lists. ACL are implemented using EA (on Linux) and provide finer-grained control when assigning access permissions to files. For more information on EA and ACL, see the site of Andreas Grunbacher:

    http://acl.bestbits.at/

    File forks under MacOS X also rely on EA, as do the security features brought by SELinux.

    EA & Differential Backup

    To determine that an EA has changed, dar looks at the ctime value of the inode. If ctime has changed (due to an EA change, but also due to a permission or owner change) dar saves the EA. ctime also changes if atime or mtime changes. So if you access a file or modify it, dar will consider that the EA have changed too. This is not really fair, I admit.

    It may sound better to compare EA one by one, and record those that have changed or have been deleted. But to be able to do so, all EA and their associated values would have to reside in the archive table of contents (the catalogue), which by design is stored in memory. As EA can grow up to 64 KB per file, this can lead to a quick saturation of the virtual memory, which is already heavily solicited by other information from the catalogue.

    These two schemes imply different patterns for saving EA in the archive. In the first case (no EA in memory except at the time of operating on them), to avoid skipping around in the archive (and asking the user to change disks too often, or just putting pressure on the cache and disk I/O), EA must be stored beside the data of the file (if present). Thus they must be distributed all along the archive (except at the end, which only contains the catalogue).

    In the second case (EA are loaded in memory for comparison), EA must reside beside or within the catalogue, in any case at the end of the archive, so that the user does not need all disks just to take an archive as reference.

    As the catalogue already grows fast with the number of files to save (from a few bytes for a hard link to around 400 bytes per directory inode), the memory saving option has been adopted.

    Thus, EA change detection is based on ctime change. Unfortunately, no system call permits restoring ctime. Thus, restoring a differential backup after its reference has been restored will present restored inodes as more recent than those in the differential archive, so the -r option would prevent any EA restoration. In consequence, -r has been disabled for EA, it only concerns data contents. If you don't want to restore any EA but just more recent data, you can use the following: -r -u "*"

    Archive structure in brief

    The Slice Level

    A slice is composed of a header, data and trailer (the trailer appeared with archive format version 8)

    +--------+-------------------------------------------+-------+
    | header |                   Data                    |Trailer|
    +--------+-------------------------------------------+-------+

    the slice header is composed of:

    • a magic number that tells this is a dar slice
    • an internal_name, which is unique to a given archive and shared by all its slices
    • a flag that tells whether the slice is the last of the archive or whether a trailer is present that contains this info.
    • an extension flag, used in older archives but now always set to 'T', telling that a TLV list follows
    • A TLV (Type Length Value) list of item, it contains:
      • the slice size
      • first slice size
      • the data_name

    The TLV list will receive any future new field related to slice header.

    +-------+----------+------+-----------+-------+
    | Magic | internal | flag | extension |  TLV  |
    | Num.  |   name   | byte |   byte    | list  |
    +-------+----------+------+-----------+-------+

    The header is the first thing to be written, and if the current slice was not the last slice (all data to write could not fit in it), before format 8, the flag field was changed to indicate that another slice follows. Since archive format 8, the flag is set to a specific value indicating that the information telling whether the slice is the last or not is placed in a slice trailer, a new "structure" that appeared with that format and which is located at the end of each slice.

    The header is also the first part to be read.

    A TLV list is of course a list of TLV:

    +-------+-------+-------+-------+--...--+-------+
    | Number| TLV 1 | TLV 2 | TLV 3 |       | TLV n |
    | of TLV|       |       |       |       |       |
    +-------+-------+-------+-------+--...--+-------+

    Each TLV item is, as usual, defined as a set of three fields:

    +---------+-------------------------+-------------------------+
    |  Type   |         Length          |          Value          |
    |(2 bytes)| (arbitrary large value) | (arbitrary large data)  |
    +---------+-------------------------+-------------------------+

    The 2-byte type is large enough for today's needs (65535 different types while only three are used); however, TLV type 65535 is reserved for future use and will signal a new format for the type field.

    To know in which slice and at which position to find a particular piece of data, dar needs to know each slice file's size. This is the reason why each slice contains the slice size information, in particular the last slice. In older versions, dar had to read the first slice first to get this slicing information. Then it could read the archive contents at the end of the last slice. Today, reading the last slice, dar can fetch the slicing scheme from the slice header (what we just detailed) and fetch the archive contents at the end of this same last slice.

    The trailer (which is one byte long) is new since archive format version 8 (released with 2.4.0). It contains the value that was located in the header flag field in older archive formats, telling whether the slice is the last of the archive or not. When writing down a single-sliced archive (no -s option provided), both the header and the trailer tell that the slice is the last of the archive (duplicated information). However, when doing a multi-sliced archive, it is not possible to know whether a slice is the last before reaching the requested amount of data per slice (which depends on the amount of bytes to save, compression ratio, encryption overhead, etc.). Thus the header flag contains a value telling that, to know whether the slice is the last or not, one must read the trailer.

    In older formats, it was necessary to seek back and update the header with the correct information when a new slice had to be created. But, keeping this behavior, it would not have been possible to compute a digest "on the fly" (see --hash option). The addition of the trailer was required for that feature: computing a md5 or sha1, ... hash for each slice. But this costs one byte per slice, yes.

    Data Name

    As seen above in the header fields, we have among others the three following identifiers:

    • magic number
    • internal name
    • data name

    As already said, the magic number is constant and lets dar be (almost) sure a given file is a dar slice file; it is in particular based on that field that the common Unix 'file' command identifies a dar archive. Also briefly explained, the internal_name is an identifier that lets dar be almost sure that several slices are from the same archive (problems can arise if two archives of the same basename have their slices mixed together: dar will see that and report it to the user).

    The new and not yet described field is the "data_name". The data_name field is also present in the archive catalogue (the table of content) of each archive. It may be the same value as the one in the slice headers (normal archives) or another value if the archive results from a catalogue isolation process.

    Why this field? A new feature with release 2.4.0 is the ability to use an extracted catalogue as a backup of the internal catalogue of a given archive. Comparing the data_name value of the catalogue resulting from the isolation operation to the data_name value present in the slices of an archive to rescue, dar can be (almost) sure that the extracted catalogue matches the data present in the archive the user is trying to use it with.

    In brief:

    Field                         | Normal  | Resliced with | Resulting from | Isolated archive resliced
                                  | archive | dar_xform     | isolation      | with dar_xform
    ------------------------------+---------+---------------+----------------+--------------------------
    internal_name (slice header)  |    A    |       B       |       C        |            D
    data_name (slice header)      |    A    |       A       |       C        |            C
    data_name (archive catalogue) |    A    |       A       |       A        |            A

    Archive Level

    The archive level describes the structure of the slices' data fields (removing the header and trailer of each slice), when they are all stuck together from slice to slice:

    +---------+----------------------------+-----------+--------+---------+--------+
    | version |            Data            | catalogue | term 1 | version | term 2 |
    | header  |                            |           |        | trailer |        |
    +---------+----------------------------+-----------+--------+---------+--------+

    version header

    The version header is an almost exact duplication of the version trailer. It is used when reading an archive in sequential mode, to be able to prepare the proper compression layer and know whether escape sequence marks are present in the archive. It is present by default but absent when removing tape marks (-at option).

    version trailer

    the version trailer (which may still be called "version header" in some parts of the documentation, because it was originally located at the beginning of the archive in previous archive formats) is composed of:

    • edition version of the archive
    • compression algorithm used
    • command line used for creating the archive, now known as "user comment"
    • flag, telling:
      • whether the archive is encrypted,
      • whether it has escape sequence marks,
      • whether the header/trailer contains an encrypted key
      • whether the header/trailer contains the initial offset field
      • whether the archive is signed
      • whether the header/trailer contains the slice layout of the archive of reference
      • whether the archive has salt + iteration count + hash algo for the key derivation function (KDF)
    • initial offset (telling where the data starts in the archive; only present in the trailer)
    • crypto algorithm used (present only if the flag tells that the archive is encrypted)
    • size of the encrypted key that follows (present only if the flag tells an encrypted key is present)
    • encrypted key (encrypted by means of a GPG asymmetric algorithm, present only if the flag says so)
    • the slice layout of the backup of reference, if the current archive is an isolated catalogue
    • eventually, salt, iteration count and hash algo for the key derivation function (KDF) (used since version 2.6.0 when strong encryption is set)
    • CRC (Cyclic Redundancy Check) computed on the whole version header or trailer
    +---------+------+---------------+------+--------+-------+----------+---------------+------------+------+------+-----------+-----+-------+ | edition | algo | command line | flag | initial| crypto| crypted | gnupg crypted | reference | salt | salt | iteration |hash | CRC | | | | | | offset | algo | key size | sym. key | slicing | size | (KDF)| count(KDF)|(KDF)| | +---------+------+---------------+------+--------+-------+----------+---------------+------------+------+------+-----------+-----+-------+

    The trailer is used when reading an archive in direct access mode, to build the proper compression layer, escape layer (needed, if marks have been inserted in the archive, to un-escape data that could otherwise be taken as an escape sequence mark) and encryption layer.

    Slice Layout

    slice_layout is a class holding information about the first slice size, the other slices' size, as well as the size of slice headers. It is used in the sar class to store the slicing information read from the class header, which for historical reasons does not use the slice_layout class but reads/writes those fields as independent ones. Class sar thus relies on class header to read or write this information from/to an archive.

    This class is also used in class header_version, which is used to read/write the archive header/trailer at the beginning and at the end of the backup. The slice_layout field in that structure is only used for isolated catalogues and contains the slicing information of the archive of reference. The class header_version relies on the slice_layout::read() and slice_layout::write() methods to read and write those fields from/to an archive's trailer/header. Here, this information is needed for the -Tslice option to work when applied to an isolated catalogue.

    The isolation process: at the end of archive creation, either read from the filesystem (normal isolation) or just written to the filesystem (on-fly isolation), the field archive::i_archive::slices holds the slice layout of the current archive. It has been read from the sar layer if the archive is sliced, or is set to a default value if there is no sar layer and thus no slicing. The isolation process is defined in archive::i_archive::op_isolate(): it transmits the slices information to macro_tools_create_layers() which, in particular, uses it to set up the header_version. Once the layers are built by this routine, this header_version structure is written at the beginning of the archive (version header). After the catalogue has been written to these layers, op_isolate() calls macro_tools_close_layers() giving it this same generated header_version (containing the slice_layout of the current archive, which will be the reference for the isolated catalogue under construction), for it to be written at the end of the archive (this is then the "version trailer").

    The data

    The data is a suite of file contents, with EA and FSA if present. When tape marks are used, a copy of the CRC is placed after the file's data and the file's EA, to be used when reading the archive in sequential mode. This CRC is also dropped into the catalogue, which takes place at the end of the archive, to be used when reading the archive in direct access mode (the default). Last, when binary delta is used, a file signature may follow the file's data:

    ....--+---------------------+-------+----+----+------------+------------+----+-----------+-----+--....
          |      file1 data     | delta | EA | FSA| file2 data | file3 data | EA |   file4   | FSA |
          | (may be compressed) |  sig  |    |    |(no EA/FSA) |            |    | delta sig |     |
    ....--+---------------------+-------+----+----+------------+------------+----+-----------+-----+--....

    In the previous archive example, we find:

    • for file1: its data, a delta signature of this data, Extended Attributes, and File Specific Attributes
    • for file2: only its data; it has no delta signature, no EA, no FSA
    • for file3: data and EA
    • for file4: no data, only a delta signature and FSA

    file1 shows all fields that can be associated with an inode, but none is mandatory, though if present they always follow this order:

    • Data or delta patch
    • followed by data/delta patch CRC when tape marks are set
    • Delta signature
    • followed by delta signature CRC when tape marks are set
    • Extended Attributes
    • followed by EA CRC when tape marks are set
    • File Specific Attributes
    • followed by FSA CRC when tape marks are set

    when tape marks are not set, the CRCs listed above are not present inline in the archive, though they are still stored in the catalogue at the end of the archive (see below)

    More precisely, when delta signatures are combined with tape marks, there are additional fields present besides the delta sig and its CRC:

    +------+------+------+---------------+----------+--------+
    | base | sig  | sig  |   sig data    | data CRC | result |
    | CRC  | size | block| (if size > 0) |    if    |  CRC   |
    | (*)  |      | len  |               | size > 0 |        |
    +------+------+------+---------------+----------+--------+
    • base CRC is the CRC of the file that delta signature has been based on, used at restoration time before applying a patch
      (*)
      Since format 11.2 (release 2.7.9) this field has been moved with the inode information in the catalogue, as well as inlined along the backup following the seqt_file (tape) mark. This CRC is only present if the data for the file contains a binary patch.
    • sig size gives the size of the next two fields ("sig block len" exists only since archive format 10.1). It may be zero if the file has no signature associated with it but the file's data is a delta patch and thus the base and result CRCs are needed. When sig size is zero, neither the "sig block len" nor the "sig data" fields are present.
    • sig block len: since format 10.1 the block size used for signature calculation is variable (it was fixed to 2048 bytes before), depending on the size of the file under consideration.
    • sig data is the delta signature data. This field and the following are the only ones present outside the ending catalogue concerning delta signatures when tape marks are not set.
    • data CRC is the CRC on the previous "sig block len" + "sig data" fields.
    • result CRC is the CRC of the resulting file once patched (to check the patching was successful)

    The catalogue

    The catalogue contains all inodes, the directory structure and hard link information, as well as data and EA CRCs. The directory structure is stored in a simple way: the inode of a directory comes first, then the inodes of the files it contains, then a special entry named "EOD" for End Of Directory. Considering the following tree:

    - toto
      | titi
      | tutu
      | tata
      |  | blup
      |  +--
      | boum
      | coucou
      +---

    it would generate the following sequence for catalogue storage:

    +------+------+------+------+------+-----+------+--------+-----+
    | toto | titi | tutu | tata | blup | EOD | boum | coucou | EOD |
    +------+------+------+------+------+-----+------+--------+-----+

    The EOD entries take one byte each. This way, there is no need to store the full path of each file, just the filename is recorded. The file order and the EOD entries can be used to find the relative path of each entry.

    To be complete, the previous sequence is preceded by:

    • the data_name as described above,
    • the in-place path, which is the root path (-R option) used at backup time, and which can be used at restoration time by means of the -ap option in place of any -R option.

    This sequence is then followed by a CRC calculated on the whole catalogue dumped data:

    +------------+----------+------------------------------------+-----+
    | data_name  | in_place |<------- catalogue content -------->| CRC |
    |            |          |  toto|titi|tata|blup|EOD|boum|...  |     |
    +------------+----------+------------------------------------+-----+

    The terminator

    The terminator stores the position of the beginning of the catalogue; it is the last thing to be written. Thus dar first reads the terminator, then the catalogue. Well, there are now two terminators, both meant to be read backward. The second terminator points to the beginning of the "version trailer", which is read first in direct access mode. The first terminator points to the start of the catalogue, which is read once the ad hoc compression and encryption layers have been built based on the information found in the "version trailer".

    All Together

    Here is an example of how data can be structured in a four sliced archive:

    +--------+--------+------------------------+--+
    | slice  | version|     file data + EA     |Tr|
    | header | header |                        |  |
    +--------+--------+------------------------+--+

    the first slice (just above) has been made smaller using the -S option

    +--------+-----------------------------------------------------------------+--+
    | slice  |                         file data + EA                          |Tr|
    | header |                                                                 |  |
    +--------+-----------------------------------------------------------------+--+

    +--------+-----------------------------------------------------------------+--+
    | slice  |                         file data + EA                          |Tr|
    | header |                                                                 |  |
    +--------+-----------------------------------------------------------------+--+

    +--------+---------------------+-----------+-------+---------+--------+--+
    | slice  |   file data + EA    | catalogue | term 1| version | term 2 |Tr|
    | header |                     |           |       | trailer |        |  |
    +--------+---------------------+-----------+-------+---------+--------+--+

    the last slice is smaller because there was not enough data to make it full.

    Other Levels

    Things get a bit more complicated if we consider compression and encryption. The way the problem is addressed in dar's code is a bit like networks are designed in computer science, using the notion of layers.

    Here, there is an additional constraint: a given layer may or may not be present (encryption, compression, slicing for example). So all layers must have the same interface for serving the layer above them.

    This interface is defined by the pure virtual class named generic_file, which provides generic methods for reading, writing, skipping, and getting the current offset when writing or reading data to any layer. Here follow some examples of implementations:

    • For example the compressor class acts as a file which compresses data written to it and writes the compressed data to another generic_file below it.

    • The strong encryption and scramble classes act the same, but in place of compressing/uncompressing they encrypt/decrypt the data to/from another generic_file object "below" them.

    • The sar class, which segments and reassembles (thus performing the slicing), follows the same principle: it transfers data written to it to several fichier [meaning "file" in French] objects.

    • Class fichier also inherits from the generic_file class, and is just a wrapper for the plain filesystem calls.

    • Some new classes have been added with format 8, in particular the escape class, which inserts escape sequence marks at requested positions, and modifies the data written to it so that it never looks like an escape sequence mark.

    • To reduce the number of context switches when reading the catalogue (which makes a ton of small reads), a cache class is also present: it gathers the small writes made to it into larger writes, and pre-reads a large amount of data to answer the many small reads issued when building the catalogue in memory from the archive.

    • fichier_libcurl acts like the class fichier, but as it relies on libcurl it lets libdar read and write files on a remote repository (using SFTP or FTP as of today).

    • class generic_rsync computes the delta signature of a file, or patches a file, using librsync

    • class hash_fichier computes a hash (md5, sha1, sha512, ...) of the data written to it before transmitting it unchanged to the layer below

    • class memory_file is used to store data in memory. secu_memory_file does the same but in secured memory (relying on libgcrypt). It is used to store sensitive data like keys and passwords

    • class null_file acts as the well-known /dev/null character device of Unix systems (it drops all that is written to it and returns no data when read, like an empty file)

    • class sparse_file replaces long sequences of zeroed bytes by a special small structure (a hole) that contains the number of zeroed bytes replaced, when you write data to it. The result is transmitted to the underlying layer. At reading time, depending on its mode of operation, it fetches data from the underlying layer and either reconstructs the hole on the filesystem (skipping the write cursor past the hole) or replaces the read hole structure by the zeroed bytes it stood for.

    Below is an example of a possible layer stacking libdar uses to address a particular combination of commands and options:

                 +----+--+----+--................--+---------+
     archive     |file|EA|file|                    |catalogue|
     layout      |data|  |data|                    |         |
                 +----+--+----+--................--+---------+
                    |        |                          |
     sparse file    (optional, applied per file data)   |
     detection      |        |                          |
     delta sig      (optional, applied per file data)   |
                    V        V                          V
                 +--------------------------------------------+
     compression |             (compressed) data              |
                 +--------------------------------------------+
                                    |
                                    V
                 +--------------------------------------------+
     escape layer|      escaped data / escape sequences       |
     (optional)  +--------------------------------------------+
                                    |
     elastic     +---+              |              +----+---+
     buffers     |EEE|              |              | T1 |EEE|   <- first terminator
                 +---+              |              +----+---+
                    V               V                  V
                 +--------------------------------------------+
     cipher      | (encrypted) data / cache if no encryption  |
                 +--------------------------------------------+
                                    |
       +-------+                    |              +---------+----+
       | header|                    |              | trailer | T2 |  <- second terminator
       +-------+                    |              +---------+----+
           V                        V                  V
     +--------------------------------------------------------+
     |                          data                           |
     +--------------------------------------------------------+
           |                |                  |
     +---------+      +---------+        +---------+
     |hash_file|      |hash_file|  ...   |hash_file|   (optional, one per slice)
     +---------+      +---------+        +---------+
           |                |                  |
           V                V                  V
     +---------+      +---------+        +---------+
     |HH| data |      |HH| data |  ...   |HH| data |   (HH = slice header, one file per slice)
     +---------+      +---------+        +---------+
       slice 1          slice 2            slice n

    The elastic buffers are here to prevent plain text attacks, where one knows which data is expected at a given place and tries to guess the cipher by comparing the expected data with the encrypted one. As dar generates structured archives, there would be some possibility of using this attack to crack an archive's encryption. To overcome this problem, elastic buffers have been added at the beginning and at the end of the encrypted data. This way it is not possible to know where a given archive structure is located within the encrypted data. An elastic buffer is random data containing, at a random place, a pattern that tells the overall size of the buffer (a size randomly chosen at archive creation). The pattern is of the form ">###<" where the field (###) contains the elastic buffer size in binary. A small elastic buffer can be "><" for two bytes or "X" for one byte, but as it is encrypted beside the archive data, someone who does not hold the archive encryption key cannot determine its size. Elastic buffers are usually several kilobytes long. Here follows an example of an elastic buffer:

    972037219>20<8172839

    For clarity, the size field between '>' and '<' has been written in decimal instead of binary, as has the random data inside the elastic buffer. The location of the size field '>20<' is also randomly chosen at creation time.
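    The size-field scheme described above can be sketched as follows. This is an illustrative toy model only (hypothetical helper names, decimal size field, padding restricted to avoid the mark characters), not libdar's binary on-disk format:

```python
import random

def make_elastic_buffer(size: int) -> bytes:
    """Toy elastic buffer: random padding with a '>size<' marker dropped
    at a random position (real dar encodes the size in binary)."""
    if size == 1:
        return b"X"
    if size == 2:
        return b"><"
    field = b">" + str(size).encode() + b"<"
    assert len(field) <= size, "buffer too small for its size field"
    # simplification: padding avoids the mark characters so the marker
    # stays unambiguous in this toy decoder
    allowed = [c for c in range(256) if c not in (ord(">"), ord("<"))]
    pad = [random.choice(allowed) for _ in range(size - len(field))]
    pos = random.randint(0, len(pad))
    return bytes(pad[:pos]) + field + bytes(pad[pos:])

def read_elastic_size(buf: bytes) -> int:
    """Recover the advertised size from the marker."""
    if buf in (b"X", b"><"):
        return len(buf)
    start = buf.index(b">")
    end = buf.index(b"<", start)
    return int(buf[start + 1:end])
```

    Once encrypted with the rest of the data, nothing distinguishes the marker from the surrounding random padding without the key.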

    A terminateur is a short structure that is intended to be read backward. It gives the absolute position of a given item within the archive: the second terminateur lets dar skip to the beginning of the archive trailer, while the first terminateur (possibly encrypted) lets dar skip to the beginning of the catalogue.

    Overflow in arithmetic integer operations

    Some explanation about how integer arithmetic overflows are detected in the code. We only consider *unsigned* integers, and only portable, standard ways to detect overflows when 32-bit or 64-bit integers are used in place of infinint.

    Written in binary notation, a number is a finite sequence of digits (0 or 1). To obtain the original number from its binary representation, we multiply each digit by successive powers of two. For example, the binary representation "101101" denotes the number N where: N = 2^5 + 2^3 + 2^2 + 2^0

    In that context we will say that 5 is the maximum power of N (the power of the highest non-null binary digit).

    For the addition operation ("+"), if an overflow occurs the result is less than both operands, so overflow is not difficult to detect: after computing the sum, just check whether the result is less than one of the operands. To convince yourself, note that the stored result is the true sum modulo 2^N (for an N-bit integer field): when an overflow occurs, the true sum exceeds the all 1 bits integer, so the stored result equals the true sum minus 2^N, and since each operand is itself less than 2^N, that result is necessarily less than both operands.
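    The rule above can be checked with a quick simulation of 32-bit unsigned arithmetic (an illustrative sketch with hypothetical names, not libdar's actual code):

```python
MASK32 = 0xFFFFFFFF  # all-ones value of a 32-bit unsigned integer

def add_u32(a: int, b: int) -> tuple:
    """Add two 32-bit unsigned values and report overflow using the
    rule from the text: the wrapped result is smaller than the operands."""
    r = (a + b) & MASK32   # wrap around, as fixed-width hardware would
    return r, r < a        # equivalently: r < b
```

    For example, add_u32(MASK32, 1) wraps to 0 and reports an overflow, while add_u32(2, 3) returns 5 with no overflow.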

    For the subtraction operation ("-"), there is an overflow if and only if the second operand is greater than the first (the result must be unsigned, thus non-negative). Detection is thus even simpler.

    For the division ("/") and modulo ("%") operations, there is never an overflow (there is just the illicit division by zero).

    For the multiplication operation ("*"), a heuristic has been chosen to quickly detect overflow; the drawback is that it may report a false overflow when numbers get near the maximum possible integer value. Here is the heuristic used:

    given A and B two integers, which max powers are m and n respectively, we have:

    A < 2^(m+1)

    and

    B < 2^(n+1)

    thus we also have:

    A.B < 2^(m+1).2^(n+1)

    which is:

    A.B < 2^(m+n+2)

    In consequence, we know that the maximum power of the product of A and B is at most m+n+1. As long as m+n+1 is less than or equal to the maximum power of the integer field, there cannot be any overflow; otherwise we consider that an overflow will occur, even though this is not always the case (it is a heuristic algorithm).
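    Here is a sketch of the heuristic for a 32-bit unsigned field (illustrative names, not libdar's code); the last example shows a false positive, where the bound triggers although the product actually fits:

```python
BITS = 32  # width of the unsigned integer field

def max_power(x: int) -> int:
    """Maximum power of x: index of its highest non-null binary digit."""
    return x.bit_length() - 1

def mul_may_overflow(a: int, b: int) -> bool:
    """Heuristic from the text: the product's maximum power is at most
    m + n + 1; flag an overflow whenever that bound exceeds the field's
    maximum power (BITS - 1)."""
    if a == 0 or b == 0:
        return False
    return max_power(a) + max_power(b) + 1 > BITS - 1
```

    mul_may_overflow(2**16, 2**16) is True and the product 2^32 really overflows, whereas mul_may_overflow(2**16, 2**15) is also True even though 2^31 fits in 32 bits: that is the announced false positive.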

    libdar and thread-safe requirement

    The following should only concern those who plan to use libdar in their own programs.

    If you expect to have only one thread using libdar, there is no problem; you will however have to call one of the get_version() functions first, as usual. Things change if you intend to have several concurrent threads using the libdar library in the same process.

    libdar is thread-safe but you need to respect certain conditions:

    Thread-safe support must have been activated at libdar compilation time. Several 'configure' options have an impact on this thread-safe support:

    --enable-test-memory
    is a debug option that prevents libdar from being thread-safe, so don't use it unless debugging behaviors unrelated to threading.
    --disable-thread-safe
    this disables thread-safe support in libdar; it may be used for debugging purposes or to marginally speed up libdar execution

    If you want to rely on multi-thread support in libdar, a good practice is to check that a call to libdar::compile_time::thread_safe() returns true. If not, your program should behave accordingly: either abort with an error or have all libdar interaction done by a single thread at a time.

    IMPORTANT
    It is vital to call one of the get_version(...) functions as the first call to libdar and to wait for its return before assuming multi-thread support is ready for use.

    For more information about libdar and its API, check the doc/api_tutorial.html document and the API reference manual under doc/html/index.html

    Native Language Support

    Native Language Support (NLS) is the ability of a given program to display its messages in different languages. For dar, this is implemented using the gettext tools. These tools must be installed on the system for dar to be able to display messages in a language other than English (some would joke that this is more Frenglish than good old English; they might be right, so feel free to report any too-French syntax, spelling or grammar).

    The dar behavior is the following:

    • On a system without gettext, dar will not use gettext at all. All messages will be in English.
    • On a system with gettext, dar will use the system's gettext, unless you use the --disable-nls option with the configure script.

    If NLS is available you just have to set the LANG environment variable to your locale settings to change the language in which dar displays its messages (see ABOUT-NLS for more about the LANG variable).

    Just for information, gettext() is the name of the call that translates strings in the program. This call is implemented in the library called 'libintl' (intl for Internationalization).

    Refer to the ABOUT-NLS file at the root of the source package to learn more about the way to display dar's messages in your own language. Note that not all languages are supported yet; it is up to you to send me a translation in your language and/or to contact a translation team as explained in ABOUT-NLS.

    To know which languages are supported by dar, read the po/LINGUAS file and check for the presence of the corresponding *.po files in that directory.

    Dar Release Process

    General view

    Dar does not follow the so-called modern continuous development process, also known as agile. First, dar is not devops code, even though it can be used for devops; it is rather a (free) product outside any particular customer-specific context. Second, keeping the good old development process, which separates development from quality engineering into different phases over time, brings the robustness expected from a backup tool.

    This separation takes the form of branches of a tree where the trunk is the development code, which grows when receiving new features, and the branches carry the so called releases. Several releases can be found on a given branch but branches only receive bug fixes between two releases.

    Several branches may be active at a time, leading to several concurrent releases (for example, releases 2.5.22 and 2.6.5 have the same date).

    Last, all bug fixes from an older but still active branch are merged into the more recent branches and up to the trunk, which is the development code. But the modification of code in branches is kept as minimal as possible.

    Phasing

    Development Phase:

    During that phase, dar receives new features. At this stage, sources are modified and unit-tested after each feature addition. These modifications and additions take place on the trunk.

    Frozen API Phase:

    At this stage, no new features that would change the API are added. The API shall be documented well enough to let API users give their feedback about the design and its implementation.

    During this time, development may continue on whatever does not change the API: documentation of the whole project, problem fixes in libdar, new features in the command-line part of the source, and so on. A set of non-regression tests is also updated and run to check that new features work as expected with old ones and that old ones still work fine together.

    Pre-release Phase:

    This phase is announced on the dar-news mailing-list, which you can subscribe to in order to be informed about new releases, security issues, and other major problems.

    The goal of this phase is to let anyone test the release candidate in their own environment and report any build problem or bug they meet to the pre-release mailing-list.

    Usually this code is on the trunk (the master branch in GIT), but pre-built packages are also provided daily for testing.

    Release Phase:

    When the pre-release ends, the first official release is provided (its last version number is zero, as for release 2.7.0). It is made broadly available by means of the sourceforge mirrors.

    A new branch (like branch_2.7.x) is created to hold the released code; it will receive any further releases made of bug fixes (releases 2.7.1, 2.7.2, ...).

    As for any new release, an email with the list of changes is sent to the dar-news mailing-list. During that phase, users are welcome to report bugs/problems, either by first asking for support on the dar-support mailing-list or by opening a bug report.

    Interim Releases:

    This type of release is fast to set up and will later become an official release. Doing so saves the time needed to update the website, generate the Windows binaries, push data to the sourceforge mirrors, and so on. Interim releases are provided for validating bug fixes and are set up from the latest code of a branch. They are usually named after the future release they will produce, suffixed by RC# (for Release Candidate); for example, version 2.6.6.RC2 existed in the past. Note that interim releases are deleted once the official release is done (release 2.6.6 in our example).

    Dar's Versions

    package release version

    Dar packages are released during the pre-release phase (see above). Each version is identified by three numbers separated by dots, for example version 2.3.0:

    • The last number is incremented between releases that take place in the same release phase (keeping track of bug fixes),
    • the middle number is incremented at each pre-release phase and designates a branch,
    • the first number is incremented when a major change in the software structure takes place, as for version 2.0.0, which saw the monolithic dar software split in two: libdar and its API on one side, and on the other side dar reduced to translating between the command-line and the API.

    Note that release versioning is completely different from what is done for the Linux kernel: for dar, all versioned packages are stable released software, and thus stability increases with the last number of the version.

    Libdar version

    Unfortunately, the release version does not give much information about the compatibility of different libdar versions from the point of view of an external application, which has not been released with libdar and may thus face different libdar versions. So libdar has its own version. It is also a three-number version (for example, libdar version 6.2.7), but each number has a different meaning:

    • The last number increases with a new version that only fixes bugs (same meaning as for package versioning),
    • the middle number increases when new features have been added while older features can still be used the same way (software that relied on the previous middle number should still work with a more recent one),
    • the first number changes when the API had to be modified in a way that changes how existing features are used.

    Other versions

    Beside the libdar library, you can find several command-line applications: dar, dar_xform, dar_slave, dar_manager, dar_cp and dar_split. These have their own versions, here too made of three numbers, with the same meaning as for the package release version: the last number increases upon bug fixes, the middle upon new features, the first upon major architecture changes.

    Archive format version

    When new features come, it is sometimes necessary to change the structure of the archive. To be able to know the format used in an archive, a field defining this format is present in each archive.

    Currently the archive format version is 11, meaning there have been around 11 different versions of the dar archive format, plus the version increments that were needed to fix bugs (like versions 8.1 and 10.1); but dar and libdar have been developed to be able to read them all (all those that existed when the version was released).

    Cross reference matrix

    OK, you may now find that this is a bit complex, so a list of versions is given below. Just remember that there are two points of view: the command-line user's and the external application developer's.

    Date release (and dar version) Archive format Database Format libdar version dar_xform dar_slave dar_manager dar_cp dar_split
    April 2nd, 2002 1.0.0 01 ----- ----- ----- ----- ----- ----- -----
    April 24th, 2002 1.0.1 01 ----- ----- ----- ----- ----- ----- -----
    May 8th, 2002 1.0.2 01 ----- ----- ----- ----- ----- ----- -----
    May 27th, 2002 1.0.3 01 ----- ----- ----- ----- ----- ----- -----
    June 26th, 2002 1.1.0 02 ----- ----- 1.0.0 1.0.0 ----- ----- -----
    Nov. 4th, 2002 1.2.0 03 01 ----- 1.1.0 1.1.0 1.0.0 ----- -----
    Jan. 10th, 2003 1.2.1 03 01 ----- 1.1.0 1.1.0 1.0.0 ----- -----
    May 19th, 2003 1.3.0 03 01 ----- 1.1.0 1.1.0 1.1.0 ----- -----
    Nov. 2nd, 2003 2.0.0 03 01 1.0.0 1.1.0 1.1.0 1.2.0 1.0.0 -----
    Nov. 21st, 2003 2.0.1 03 01 1.0.1 1.1.0 1.1.0 1.2.0 1.0.0 -----
    Dec. 7th, 2003 2.0.2 03 01 1.0.2 1.1.0 1.1.0 1.2.0 1.0.0 -----
    Dec. 14th, 2003 2.0.3 03 01 1.0.2 1.1.0 1.1.0 1.2.1 1.0.0 -----
    Jan. 3rd, 2004 2.0.4 03 01 1.0.2 1.1.0 1.1.0 1.2.1 1.0.0 -----
    Feb. 8th, 2004 2.1.0 03 01 2.0.0 1.2.0 1.2.0 1.2.1 1.0.0 -----
    March 5th, 2004 2.1.1 03 01 2.0.1 1.2.1 1.2.1 1.2.2 1.0.0 -----
    March 12th, 2004 2.1.2 03 01 2.0.2 1.2.1 1.2.1 1.2.2 1.0.0 -----
    May 6th, 2004 2.1.3 03 01 2.0.3 1.2.1 1.2.1 1.2.2 1.0.1 -----
    July 13th, 2004 2.1.4 03 01 2.0.4 1.2.1 1.2.1 1.2.2 1.0.1 -----
    Sept. 12th, 2004 2.1.5 03 01 2.0.5 1.2.1 1.2.1 1.2.2 1.0.1 -----
    Jan. 29th, 2005 2.1.6 03 01 2.0.5 1.2.1 1.2.1 1.2.2 1.0.1 -----
    Jan. 30th, 2005 2.2.0 04 01 3.0.0 1.3.0 1.3.0 1.3.0 1.0.1 -----
    Feb. 20th, 2005 2.2.1 04 01 3.0.1 1.3.1 1.3.1 1.3.1 1.0.1 -----
    May 12th, 2005 2.2.2 04 01 3.0.2 1.3.1 1.3.1 1.3.1 1.0.2 -----
    Sept. 13th, 2005 2.2.3 04 01 3.1.0 1.3.1 1.3.1 1.3.1 1.0.2 -----
    Nov. 5th, 2005 2.2.4 04 01 3.1.1 1.3.1 1.3.1 1.3.1 1.0.2 -----
    Dec. 6th, 2005 2.2.5 04 01 3.1.2 1.3.1 1.3.1 1.3.1 1.0.2 -----
    Jan. 19th, 2006 2.2.6 04 01 3.1.3 1.3.1 1.3.1 1.3.1 1.0.3 -----
    Feb. 24th, 2006 2.2.7 04 01 3.1.4 1.3.1 1.3.1 1.3.1 1.0.3 -----
    Feb. 24th, 2006 2.3.0 05 01 4.0.0 1.4.0 1.3.2 1.4.0 1.1.0 -----
    June 26th, 2006 2.3.1 05 01 4.0.1 1.4.0 1.3.2 1.4.0 1.1.0 -----
    Oct. 30th, 2006 2.3.2 05 01 4.0.2 1.4.0 1.3.2 1.4.0 1.1.0 -----
    Feb. 24th, 2007 2.3.3 05 01 4.1.0 1.4.0 1.3.2 1.4.1 1.2.0 -----
    June 30th, 2007 2.3.4 06 01 4.3.0 1.4.0 1.3.2 1.4.1 1.2.0 -----
    Aug. 28th, 2007 2.3.5 06 01 4.4.0 1.4.1 1.3.3 1.4.2 1.2.1 -----
    Sept. 29th, 2007 2.3.6 06 01 4.4.1 1.4.1 1.3.3 1.4.2 1.2.1 -----
    Feb. 10th, 2008 2.3.7 06 01 4.4.2 1.4.2 1.3.4 1.4.3 1.2.2 -----
    June 20th, 2008 2.3.8 07 01 4.4.3 1.4.2 1.3.4 1.4.3 1.2.2 -----
    May 22nd, 2009 2.3.9 07 01 4.4.4 1.4.2 1.3.4 1.4.3 1.2.2 -----
    April 9th, 2010 2.3.10 07 01 4.4.5 1.4.2 1.3.4 1.4.3 1.2.2 -----
    March 13th, 2011 2.3.11 07 01 4.5.0 1.4.3 1.3.4 1.4.3 1.2.2 -----
    February 25th, 2012 2.3.12 07 01 4.5.1 1.4.3 1.3.4 1.4.3 1.2.2 -----
    June 2nd, 2011 2.4.0 08 02 5.0.0 1.5.0 1.4.0 1.5.0 1.2.3 -----
    July 21st, 2011 2.4.1 08 02 5.1.0 1.5.0 1.4.0 1.6.0 1.2.3 -----
    Sept. 5th, 2011 2.4.2 08 02 5.1.1 1.5.0 1.4.0 1.6.0 1.2.3 -----
    February 25th, 2012 2.4.3 08 03 5.2.0 1.5.0 1.4.0 1.7.0 1.2.3 -----
    March 17th, 2012 2.4.4 08 03 5.2.1 1.5.0 1.4.0 1.7.1 1.2.3 -----
    April 15th, 2012 2.4.5 08 03 5.2.2 1.5.1 1.4.1 1.7.2 1.2.4 -----
    June 24th, 2012 2.4.6 08 03 5.2.3 1.5.2 1.4.2 1.7.3 1.2.5 -----
    July 5th, 2012 2.4.7 08 03 5.2.4 1.5.2 1.4.3 1.7.3 1.2.5 -----
    September 9th, 2012 2.4.8 08 03 5.3.0 1.5.3 1.4.4 1.7.4 1.2.6 -----
    January 6th, 2013 2.4.9 08 03 5.3.1 1.5.3 1.4.4 1.7.4 1.2.7 -----
    March 9th, 2013 2.4.10 08 03 5.3.2 1.5.3 1.4.4 1.7.4 1.2.7 -----
    Aug. 26th, 2013 2.4.11 08 03 5.4.0 1.5.4 1.4.5 1.7.5 1.2.8 -----
    January 19th, 2014 2.4.12 08 03 5.5.0 1.5.4 1.4.5 1.7.6 1.2.8 -----
    April 21st, 2014 2.4.13 08 03 5.6.0 1.5.5 1.4.5 1.7.7 1.2.8 -----
    June 15th, 2014 2.4.14 08 03 5.6.1 1.5.5 1.4.5 1.7.7 1.2.8 -----
    September 6th, 2014 2.4.15 08 03 5.6.2 1.5.6 1.4.6 1.7.8 1.2.8 -----
    January 18th, 2015 2.4.16 08 03 5.6.3 1.5.6 1.4.6 1.7.8 1.2.8 -----
    January 31st, 2015 2.4.17 08 03 5.6.4 1.5.6 1.4.6 1.7.8 1.2.8 -----
    August 30th, 2015 2.4.18 08.1 03 5.6.5 1.5.6 1.4.6 1.7.8 1.2.8 -----
    October 4th, 2015 2.4.19 08.1 03 5.6.6 1.5.6 1.4.6 1.7.8 1.2.8 -----
    November 21st, 2015 2.4.20 08.1 03 5.6.7 1.5.8 1.4.8 1.7.10 1.2.10 -----
    April 24th, 2016 2.4.21 08.1 03 5.6.8 1.5.9 1.4.9 1.7.11 1.2.10 -----
    June 5th, 2016 2.4.22 08.1 03 5.6.9 1.5.9 1.4.9 1.7.11 1.2.10 -----
    October 29th, 2016 2.4.23 08.1 03 5.6.9 1.5.9 1.4.9 1.7.11 1.2.10 -----
    January 21st, 2017 2.4.24 08.1 03 5.6.10 1.5.9 1.4.9 1.7.11 1.2.10 -----
    October 4th, 2015 2.5.0 09 04 5.7.0 1.5.7 1.4.7 1.7.9 1.2.9 1.0.0
    October 17th, 2015 2.5.1 09 04 5.7.1 1.5.8 1.4.8 1.7.10 1.2.10 1.0.0
    November 21st, 2015 2.5.2 09 04 5.7.2 1.5.8 1.4.8 1.7.10 1.2.10 1.0.0
    January 4th, 2016 2.5.3 09 04 5.7.3 1.5.8 1.4.8 1.7.10 1.2.10 1.0.0
    April 24th, 2016 2.5.4 09 04 5.8.0 1.5.9 1.4.9 1.7.11 1.2.10 1.0.0
    June 5th, 2016 2.5.5 09 04 5.8.1 1.5.9 1.4.9 1.7.11 1.2.10 1.0.0
    September 10th, 2016 2.5.6 09 04 5.8.2 1.5.9 1.4.9 1.7.11 1.2.10 1.0.0
    October 29th, 2016 2.5.7 09 04 5.8.3 1.5.9 1.4.9 1.7.11 1.2.10 1.0.0
    January 2nd, 2017 2.5.8 09 04 5.8.4 1.5.9 1.4.9 1.7.11 1.2.10 1.0.0
    January 21st, 2017 2.5.9 09 04 5.9.0 1.5.9 1.4.9 1.7.11 1.2.10 1.0.0
    April 4th, 2017 2.5.10 09 04 5.10.0 1.5.9 1.4.9 1.7.11 1.2.10 1.0.0
    June 23rd, 2017 2.5.11 09 04 5.11.0 1.5.9 1.4.9 1.7.12 1.2.10 1.0.0
    September 2nd,2017 2.5.12 09 04 5.11.1 1.5.9 1.4.9 1.7.12 1.2.10 1.0.0
    October 28th, 2017 2.5.13 09 04 5.12.0 1.5.10 1.4.10 1.7.13 1.2.10 1.0.0
    December 20th, 2017 2.5.14 09 04 5.12.1 1.5.10 1.4.10 1.7.13 1.2.10 1.1.1
    April 28th, 2018 2.5.15 09 04 5.12.2 1.5.10 1.4.10 1.7.13 1.2.10 1.1.1
    July 19th, 2018 2.5.16 09 04 5.12.3 1.5.10 1.4.10 1.7.13 1.2.10 1.1.1
    September 30th, 2018 2.5.17 09 04 5.13.0 1.5.10 1.4.10 1.7.13 1.2.10 1.1.1
    December 8th, 2018 2.5.18 09 04 5.13.1 1.5.10 1.4.10 1.7.13 1.2.10 1.1.1
    January 19th, 2019 2.5.19 09 04 5.13.2 1.5.11 1.4.11 1.7.14 1.2.10 1.1.1
    February 9th, 2019 2.5.20 09 04 5.13.3 1.5.11 1.4.11 1.7.14 1.2.10 1.1.1
    May 25th, 2019 2.5.21 09 04 5.13.4 1.5.11 1.4.11 1.7.14 1.2.10 1.1.1
    July 6th, 2019 2.5.22 09 04 5.13.5 1.5.11 1.4.11 1.7.14 1.2.10 1.1.1
    December 16th, 2018 2.6.0 10 05 6.0.0 1.6.0 1.5.0 1.8.0 1.2.11 1.1.2
    January 19th, 2019 2.6.1 10 05 6.0.1 1.6.0 1.5.0 1.8.0 1.2.11 1.1.2
    February 9th, 2019 2.6.2 10 05 6.0.2 1.6.1 1.5.1 1.8.1 1.2.12 1.1.2
    March 30th, 2019 2.6.3 10.1 05 6.1.0 1.6.1 1.5.1 1.8.1 1.2.12 1.1.2
    May 25th, 2019 2.6.4 10.1 05 6.1.1 1.6.1 1.5.1 1.8.1 1.2.12 1.1.2
    July 6th, 2019 2.6.5 10.1 05 6.1.2 1.6.1 1.5.1 1.8.1 1.2.12 1.1.2
    September 21st, 2019 2.6.6 10.1 05 6.2.0 1.6.2 1.5.2 1.8.2 1.2.12 1.1.2
    January 12th, 2020 2.6.7 10.1 05 6.2.1 1.6.2 1.5.2 1.8.2 1.2.12 1.1.2
    February 8th, 2020 2.6.8 10.1 05 6.2.2 1.6.2 1.5.2 1.8.2 1.2.12 1.1.2
    March 22nd, 2020 2.6.9 10.1 05 6.2.3 1.6.2 1.5.2 1.8.2 1.2.12 1.1.2
    May 31st, 2020 2.6.10 10.1 05 6.2.4 1.6.3 1.5.3 1.8.3 1.2.13 1.1.3
    September 5th, 2020 2.6.11 10.1 05 6.2.5 1.6.3 1.5.3 1.8.3 1.2.13 1.1.3
    September 11th, 2020 2.6.12 10.1 05 6.2.6 1.6.3 1.5.3 1.8.3 1.2.13 1.1.3
    November 8th, 2020 2.6.13 10.1 05 6.2.7 1.6.3 1.5.3 1.8.3 1.2.13 1.1.3
    March 14th, 2021 2.6.14 10.1 05 6.2.8 1.6.3 1.5.3 1.8.3 1.2.13 1.1.3
    May 13th, 2021 2.6.15 10.1 05 6.2.9 1.6.3 1.5.3 1.8.3 1.2.13 1.1.3
    January 2nd, 2022 2.6.16 10.1 05 6.2.10 1.6.3 1.5.3 1.8.3 1.2.13 1.1.3
    April 24th, 2021 2.7.0 11 06 6.3.0 1.7.0 1.6.0 1.9.0 1.2.13 1.2.0
    May 13th, 2021 2.7.1 11.1 06 6.4.0 1.7.0 1.6.0 1.9.0 1.2.13 1.2.0
    September 25th, 2021 2.7.2 11.1 06 6.4.1 1.7.0 1.6.0 1.9.0 1.2.13 1.2.0
    January 2nd, 2022 2.7.3 11.1 06 6.4.2 1.7.0 1.6.0 1.9.0 1.2.13 1.2.1
    March 13th, 2022 2.7.4 11.1 06 6.4.3 1.7.0 1.6.0 1.9.0 1.2.13 1.2.1
    April 13th, 2022 2.7.5 11.1 06 6.4.4 1.7.0 1.6.0 1.9.0 1.2.13 1.2.1
    June 19th, 2022 2.7.6 11.1 06 6.4.5 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    August 7th, 2022 2.7.7 11.1 06 6.4.6 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    November 29th, 2022 2.7.8 11.1 06 6.5.0 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    March 26th, 2023 2.7.9 11.2 06 6.5.1 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    June 25th, 2023 2.7.10 11.2 06 6.6.0 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    August 5th, 2023 2.7.11 11.2 06 6.6.1 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    September 3rd, 2023 2.7.12 11.3 06 6.7.0 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    October 1st, 2023 2.7.13 11.3 06 6.7.1 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    March 23rd, 2024 2.7.14 11.3 06 6.7.2 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    June 6th, 2024 2.7.15 11.3 06 6.7.3 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    December 9th, 2024 2.7.16 11.3 06 6.8.0 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2
    March 22nd, 2025 2.7.17 11.3 06 6.8.1 1.7.0 1.6.0 1.9.0 1.3.0 1.2.2

    Scrambling (weak encryption)

    Before strong encryption was implemented, dar had only a very simple and weak encryption mechanism. It remains available in the current release under the "scram" algorithm name. Its main advantage is that it does not rely on any external library; it is completely part of libdar.

    How does it work?

    Consider the pass phrase as a string, thus a sequence of bytes, thus a sequence of integers, each one between 0 and 255 (inclusive). The data to "scramble" is also a sequence of bytes, usually much longer than the pass phrase. The principle is to add the pass phrase to the data byte by byte, modulo 256, repeating the pass phrase all along the archive. Let's take an example:

    the pass phrase is "he\220lo" (where \220 is the character whose decimal value is 220). The data is "example".

    Taken from ASCII standard we have:

    • h = 104
    • l = 108
    • o = 111
    • e = 101
    • x = 120
    • a = 97
    • m = 109
    • p = 112
           e    x    a    m    p    l    e
         101  120   97  109  112  108  101
      +    h    e \220    l    o    h    e
         104  101  220  108  111  104  101
      -------------------------------------
         205  221  317  217  223  212  202
      modulo 256:
         205  221   61  217  223  212  202
        \205 \221    = \217 \223 \212 \202

    thus the data "example" will be written in the archive as \205\221=\217\223\212\202

    This method allows decoding any portion of the data without knowing the rest of it. It does not consume many resources to compute, but it is terribly weak and easy to crack. Of course, the data is more difficult to retrieve without the key when the key is long. Today dar can also use strong encryption (blowfish and a few others) and, thanks to encryption blocks, can still avoid reading the whole archive to restore a single file.
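    The worked example above translates directly into a few lines (hypothetical function names; the byte-wise modulo-256 addition is the documented "scram" principle):

```python
def scramble(data: bytes, passphrase: bytes) -> bytes:
    """'scram' encryption: add the repeated passphrase to the data,
    byte by byte, modulo 256."""
    return bytes((d + passphrase[i % len(passphrase)]) % 256
                 for i, d in enumerate(data))

def unscramble(data: bytes, passphrase: bytes) -> bytes:
    """Inverse operation: subtract the repeated passphrase, modulo 256."""
    return bytes((d - passphrase[i % len(passphrase)]) % 256
                 for i, d in enumerate(data))
```

    With the pass phrase bytes (104, 101, 220, 108, 111) from the example, scramble(b"example", ...) returns the bytes (205, 221, 61, 217, 223, 212, 202), where 61 is '=' as shown above.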

    Strong encryption internals

    Encryption per block

    In order not to completely break the possibility of directly accessing a file within a dar backup, the archive is not encrypted as a whole (as an external program like openssl would probably do). The encryption is done per large block of data. This way, each block can be decrypted independently from the others, and if you want to read some data somewhere you only need to decrypt the block(s) it is located in.

    Such an encryption block size can range from 10 bytes to 4 GB (10 kiB is the default value). We will refer to these as libdar cipher blocks, to differentiate them from what the underlying cipher algorithm uses as a block.
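    Per-block encryption is what keeps random access cheap: to read a range of clear data, only the enclosing libdar cipher block(s) must be deciphered. The bookkeeping is simple integer arithmetic (an illustrative sketch with a hypothetical function name):

```python
def blocks_to_decrypt(offset: int, length: int,
                      block_size: int = 10 * 1024) -> tuple:
    """Return the first and last libdar cipher block numbers covering
    the clear-data range [offset, offset + length); 10 kiB is the
    default block size mentioned in the text."""
    first = offset // block_size
    last = (offset + length - 1) // block_size
    return first, last
```

    For instance, reading 100 bytes at offset 10239 with the default block size touches blocks 0 and 1, while most short reads stay inside a single block.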

    CBC and IV

    Inside each libdar cipher block, dar relies on CBC mode (Cipher Block Chaining). In this mode, the data of a libdar cipher block is split into small plaintext blocks (a few tens of bytes, as defined by the cipher algorithm). These plaintext blocks are not ciphered independently: the ciphering result of one block is mixed with the next plaintext block before that block's ciphering operation. The advantage of doing so is that a repeated data pattern does not lead to a repeated encryption pattern. However, the first plaintext block of a libdar cipher block is not mixed with anything; to cope with that, an Initial Vector (IV) is provided to 'randomize' the encryption result, though one has to record what the IV was at encryption time to be able to decipher the data.

    The IV for each libdar cipher block is derived from the block's number inside the dar backup/archive and from the encryption key. The result is that even if you have the same data in two (or more) libdar cipher blocks, they will not produce the same encrypted data.
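    This derivation can be modeled roughly as follows. libdar actually encrypts the block number with an auxiliary key hashed from the main key (an ESSIV-style scheme, as the diagram later in this section shows); since Python's standard library has no block cipher, HMAC stands in for that encryption step here, so this is only an illustration of the idea, not libdar's algorithm:

```python
import hashlib
import hmac

def block_iv(main_key: bytes, block_number: int, iv_len: int = 16) -> bytes:
    """Derive a per-block IV from the key and the block number, so that
    identical data in two blocks never shares an IV."""
    essiv_key = hashlib.sha256(main_key).digest()  # auxiliary key hashed from the main key
    msg = block_number.to_bytes(8, "big")          # the block's number in the archive
    return hmac.new(essiv_key, msg, hashlib.sha256).digest()[:iv_len]
```

    The property that matters is visible immediately: the same key with two different block numbers yields two unrelated IVs, while the same inputs always reproduce the same IV (which is needed at decryption time).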

    Elastic buffer

    An "elastic buffer" is introduced at the beginning and at the end of the archive, to protect against plain text attacks. The elastic buffer size varies randomly and is defined at execution time. It is composed of randomly generated data (using gcry_create_nonce()).

    But to be able, at reading time, to determine the amount of random garbage that has been put at the beginning and at the end of the archive by means of such elastic buffers, a small structure telling the overall size of this random data is dropped at a random place inside the garbage:

    Two mark characters '>' and '<' delimit a size field, which indicates the byte size of the elastic buffer. The size field is randomly placed in the elastic buffer. Last, the buffer is encrypted with the rest of the data. Typical elastic buffer sizes range from 1 byte to 10 kB, for both the initial and terminal elastic buffers.

    Elastic buffers may also be used inside libdar cipher blocks, when the libdar cipher block size is not a multiple of the ciphering algorithm's plaintext block size. Such an elastic buffer is added as padding at the end of the libdar cipher block.

    Key Derivation Function - KDF

    Just above, we saw:

    • how CBC inside a libdar cipher block avoids repetitive data showing up as repetitive encrypted data,
    • how different IVs avoid repetitive data between libdar cipher blocks showing up as repetitive encrypted data,
    • how elastic buffers randomly move data structures inside an archive to minimize code book attacks.

    First, what if you always use the same password/passphrase for different archives? Breaking one archive will let the attacker access all archives or backups. Worse, the attacker has much more data to work on for computing statistical decryption methods to reach their goal.

    Second, directly using a human-provided password would give weak encryption, because among all the possible key values, human choices lead to a very restricted set: only letters, digits, spaces and possibly punctuation characters, with a certain order at word and phrase level; even 5ubst1tu1n9 (substituting) letters with digits does not bring much more randomness... Consider that of the 256 different values a byte can take, we barely use 50 of them, and that of the many possible letter combinations, we are far from using them all (how many English words do you know that contain 'xz'? ...or even longer unreadable sequences?)

    To get one step further in securing your data, libdar makes use of a Key Derivation Function (KDF):

    Strong encryption uses a secret key to cipher/uncipher data; we will say "password" to simplify in the following.

    To overcome this human weakness, the state of the art is to use a Key Derivation Function, which takes as input the human-provided password plus a "salt" and outputs a stronger key (the salt is described below). This function is based on a hash algorithm (sha1 by default, but this can be changed since archive format 10/release 2.6.0) and an iteration count (2000 before format 10, and 200000 iterations by default since release 2.6.0). The KDF mixes salt and passphrase, hashes the result, then applies the hash algorithm on it repeatedly, as many times as the selected iteration count; the final result is the hashed password.

    Note that as an alternative to PBKDF2 (from PKCS#5 v2), which is the name of the KDF algorithm just described, release 2.7.0 brings the modern argon2 KDF, which is the default when dar has been built against the libargon2 library. Argon2 also makes use of a salt and an iteration count, so what was described above stays valid at a global level.

    So the KDF transforms the well-known human namespace of secret keys into a new key namespace, not wider, but not that well known. Of course, anyone with knowledge of the salt, iteration count and algorithm is able to compute it, but it costs a lot of computing resources! And this is the objective. There is no need to hide or keep secret the KDF parameters, when an attacker that would have required hours or days to crack a password with a dictionary attack now needs years or centuries to do so.
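    PBKDF2 as described is available directly in Python's standard library; the defaults below mirror the post-2.6.0 values quoted above (sha1, 200000 iterations), but the wrapper function itself is illustrative, not libdar code:

```python
import hashlib

def derive_key(passphrase: str, salt: bytes,
               iterations: int = 200_000, hash_name: str = "sha1") -> bytes:
    """PBKDF2 (PKCS#5 v2): mix passphrase and salt, then iterate the
    hash algorithm many times to make dictionary attacks expensive."""
    return hashlib.pbkdf2_hmac(hash_name, passphrase.encode(), salt, iterations)
```

    The same passphrase with two different salts produces two unrelated keys, which is exactly why the salt can be stored in clear in the archive header.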

    Choosing the salt

    The salt is a value randomly chosen by libdar and stored in clear in the archive header, beside the iteration count and hash algorithm used for the KDF. Thus, even if the user keeps the same password for different archives, the effective key used for strong encryption will differ from archive to archive or from backup to backup, making it much more difficult for an attacker to crack an archive using statistical methods over a large number of archives sharing the same human-provided password. Second, a dictionary attack is much more costly to run: it needs either a hundred thousand times more time or a hundred thousand times more CPU power.

    In summary

    In summary, the salt randomizes keys between archives, elastic buffers randomize data location inside the archive, the IV randomizes encryption between libdar cipher blocks, and CBC mode randomizes encryption within a libdar cipher block.

    Here follows a diagram of the way key, block number, cipher algorithm and Initial Vector (IV) interact together:

      algorithm ---+---> max key len
                   |
                   +----------------------+
                                          v
      password ---------+          [ algo in CBC mode ] ---> main key handle
                        |                 ^                         |
      salt -------------+                 |                         |
                        +--> [ KDF ] --> hashed_key --+             |
      hash algo --------+                             |             |
                        |                             v             |
      iteration count --+                      [ SHA1/SHA256 ]      |
                                                      |             |
                                                      v             |
                                               essiv_password       |
                                                      |             |
                                                      v             |
                                   [ Blowfish/AES256 in ECB mode ]  |
                                                      |             |
                                                      v             |
                                               essiv key handle     |
      . . . . . . . . . . . . . . . . . . . . . . . . | . . . . . . | . . (Initialization above this line)
                                                      v             |
      libdar cipher block's number ----------->  [ encrypt ] --> IV |
                                                                 |  |
                                                                 v  v
      libdar cipher block's data -------------------> [ encrypt/decrypt ] ---> data

    Public key based encryption internals

    Dar does not encrypt the whole archive with a recipient's public key, but rather randomly chooses a (long) password for symmetric encryption (as seen above, except that it needs no KDF and salt, as it can select a much better random key than a human can), encrypts that password with the recipient's public key (optionally signing it with your own private key) and drops a copy of this ciphered/signed data into the archive. At reading time, dar reads the archive header to find the encrypted password, decrypts it using the user's private key, then uses that password, now in clear, to decrypt the rest of the archive with the ad hoc symmetric algorithm.

    Signing is done on the randomly chosen ciphering key as seen above, but also on the catalogue (the table of contents located at the end of the archive). More precisely, the catalogue is sha512-hashed and the resulting hash is signed. The catalogue hash and its signature are stored encrypted in the archive right after the catalogue.

    Why not use only asymmetric encryption end to end?

    First, for a given fixed amount of data, the resulting ciphered data size may vary. Thus, sticking ciphered blocks together in the archive would make it difficult to know where a block starts and where it ends. Second, doing it this way allows an archive to have several different recipients: the password is ciphered for each of them and the archive is readable by any specified recipient, while they do not share any key. This approach has very little impact on archive size.
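The hybrid, multi-recipient layout can be sketched with a toy model. To stay self-contained, the keystream function below is a stand-in for the real symmetric cipher and the per-recipient XOR a stand-in for public key encryption of the session key; none of this is dar's actual on-disk format:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # toy stream cipher: sha256 in counter mode (illustration only)
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

session_key = secrets.token_bytes(32)       # random, no KDF/salt needed
archive_data = b"archive payload ..."
ciphered = xor(archive_data, keystream(session_key, len(archive_data)))

# The session key is ciphered once per recipient (stand-in for public key crypto);
# only these small header entries grow with the number of recipients
recipients = {"alice": secrets.token_bytes(32), "bob": secrets.token_bytes(32)}
header = {name: xor(session_key, rk) for name, rk in recipients.items()}

# Any recipient recovers the session key with its own key, then the data
recovered_key = xor(header["bob"], recipients["bob"])
assert xor(ciphered, keystream(recovered_key, len(ciphered))) == archive_data
```

The bulk of the archive is ciphered once, whichever the number of recipients, which is why the impact on archive size stays negligible.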

    Why not use signing from end to end?

    First, this can be done without dar: just use gpg on the slices of an archive, so there is no advantage in having such a feature inside dar. The other reason is time: signing the whole archive would take long, and it would also be very painful to validate a given archive's authenticity (the time to read a whole archive). Here we can leverage the fact that the archive is encrypted (thus tampering with the archive would be seen when deciphering it) and that dar uses several mechanisms (compression, clear text CRC) to detect data corruption.

    Security restriction when signing an archive for multiple recipients

    For a multi-recipient archive, any recipient can decode the signed and encrypted key. One could even reuse this same encryption key, which may also be signed by the original sender, to build a new archive with totally different content. Since 2.7.0 a warning is issued when this condition is met, but the action is still allowed, as it may make sense when exchanging data between a set of people that trust each other but still need to validate the authenticity of the exchanged data (over the Internet for example).

    But exploiting this security restriction remains difficult, as the table of contents (the catalogue) is also signed within the archive. Touching any part of it would break this signature, thus the archive table of contents cannot be tampered with if the signature algorithm is strong. This catalogue contains the CRC of every file's data, EA and FSA, as well as the size of the compressed data. Thus the attacker may modify a file's data but must manage for the resulting modification to still satisfy the CRC and keep the same length. Worse, if compression is used, the modified data, once compressed with the same algorithm that was used for that file in the original archive, must not exceed the recorded amount of compressed data.

    In short, while this security restriction exists and is exploitable in theory, it is not so easy to exploit for a large portion of the archive.

    Multi-threading in libdar

    Ciphering

    The ciphering in dar/libdar is performed on large blocks of data (see the -# option and related options). It was thus possible (added in 2.7.0) to parallelize the ciphering/deciphering operation, assigning different blocks to different threads without changing the archive format. The measured performance gain is quite good, but usually hidden by the compression load or disk I/O. More than two threads for ciphering/deciphering does not bring much benefit to the overall processing time.

    To assign blocks to worker threads (those that compute the ciphering algorithm), two helper threads are needed: one that copies data into blocks of memory and scatters them to the workers so they can work in total autonomy, and another that gathers the resulting blocks of memory in the correct order.
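This scatter/gather pattern can be sketched in a few lines (the names and the trivial per-block transformation below are made up for the illustration; they stand in for the real ciphering work):

```python
from concurrent.futures import ThreadPoolExecutor

def cipher_block(indexed_block):
    # each block carries its index so workers can run in total autonomy
    index, clear = indexed_block
    ciphered = bytes(b ^ 0x5A for b in clear)  # stand-in for the real cipher
    return index, ciphered

data = b"some archive data to cipher, split into fixed-size blocks"
block_size = 8
blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]

# scatter blocks to the workers, then gather the results
with ThreadPoolExecutor(max_workers=2) as workers:
    results = list(workers.map(cipher_block, enumerate(blocks)))

# reassemble in the original order before writing out
ciphered = b"".join(block for _, block in sorted(results))
assert len(ciphered) == len(data)
```

Because each block is independent, no worker ever waits on another's output; only the gathering side must restore the original block order.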

    While the performance gain is real and no change in the archive format was necessary to leverage multi-threading for ciphering/deciphering, this has some impact on the memory used by dar. Each worker thread needs a pair of blocks (one for the clear data, one for its ciphered equivalent). In addition, the scattering and gathering structures that carry blocks to/from the workers (which we will globally designate by the term ratelier) can each hold N blocks of memory (N being the number of workers); in fact the size of these structures has been chosen a bit wider: N+N/2. Last, the helper thread that feeds data to the workers may hold one pair of blocks under construction, and the gathering thread may obtain from the ratelier_gather a whole structure (N+N/2 blocks) to work with. For N workers, the number of memory blocks is N+N/2 for each ratelier, plus a pair per worker, plus N+N/2 for the gathering thread and one pair for the scattering thread, which makes around 7*N blocks of additional memory. Concretely, having 5 threads with encryption blocks of 10240 bytes (10 KiB) leads dar to allocate 350 KiB (7x5x10) of additional memory compared to the single-threaded implementation.
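The ~7*N block-count arithmetic above can be checked in a few lines (the function name is made up for the illustration):

```python
def extra_memory_bytes(workers: int, block_size: int) -> int:
    # ~7*N blocks in total: N+N/2 per ratelier (there are two of them),
    # 2 per worker, N+N/2 for the gathering thread, plus the pair
    # under construction in the feeding thread
    return 7 * workers * block_size

# the encryption example from the text: 5 threads, 10 KiB blocks -> 350 KiB
assert extra_memory_bytes(5, 10 * 1024) == 350 * 1024

# parallel compression follows the same formula (see the next section):
# 4 threads, 102400-byte blocks -> 2800 KiB
assert extra_memory_bytes(4, 102400) == 2800 * 1024
```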

    Compression

    For compression, things are more complicated, as before release 2.7.0 compression was done per file in streaming mode. In that mode dar has to provide a sequential flow of bytes to the compression engine and at the same time retrieve the compressed/decompressed counterpart. The APIs provided by the different libraries (libz, libbzip2, liblzo2, liblzma5, libzstd, liblz4) do not provide a multi-threaded approach for this mode of compressing data; however, it gives the best compression ratio you can expect for a given file, compression algorithm and compression level.

    Before 2.7.0, parallelization had been tried, with little success, by running compression and encryption in different threads. Usually one of the threads spent most of its time waiting for the other, and the overall performance gain was small due to the added thread management and synchronization overhead. This scheme has been completely abandoned in favor of the one exposed above for encryption. To extend that method to compression, a new compression mode had to be added beside the legacy streaming one: the block compression mode, which is described below. Note that with release 2.7.0 both modes coexist: you are still able to read old archives and to generate new ones with the old single-thread/streaming approach.

    The implementation is quite similar to the encryption approach: a set of workers proceeds with the compression or decompression of one block at a time, and two helper threads are used, one feeding blocks on one side and the other gathering the results on the other side.

    The consequence is a slightly degraded compression ratio when the block size is chosen small. With larger blocks there is no noticeable compression ratio difference, though it has an impact on memory usage. That's a balance to find between how much memory you have and want to use, how much time you have and want to spend on compression, and how much space you want to use to store the resulting archive. Choosing a block size of 102400 bytes (see the extended syntax of the -z option) with 4 threads (see the -G option) will lead libdar to allocate 2800 KiB (~2.7 MiB) in addition to what libdar already uses with the single-threaded approach (same formula as described above for encryption). If you use both parallel encryption and parallel compression at the same time, you have to sum the additional memory requirements of each feature.

    Command-line

    The number of threads for both encryption and compression is driven by the -G option. See the man page for details. Note that, as described above, multi-threaded compression/decompression is only possible with the new per-block compression method, which means you have to specify a compression block size (see the -z option).

    Streaming versus block compression mode

    Streaming compression

    Since its first days with version 1.0.0, dar/libdar has provided streaming compression. It started with the gzip algorithm, was later extended to bzip2, then lzo, xz/lzma, and since release 2.7.0 two new algorithms have been added: zstd and lz4.

    Streaming compression lets libdar feed uncompressed data bytes to the compression engine and, from time to time, get back some compressed data bytes in return. There is no limitation in size: a stream of uncompressed data enters the compression engine, from which exits another stream of compressed data. You do not need very large memory buffers to feed and gather data to/from the engine, and you can obtain a very good compression ratio. However, the process cannot be parallelized, as the stream of incoming data must be provided to this single engine in the expected order. The process is similar for decompression: you give the compressed data back to the engine. It does not matter whether you feed the engine in smaller or larger groups of bytes than the engine produced when compressing; as long as you keep the byte sequence order, the resulting uncompressed stream will be the original one you gave for compression.
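This boundary independence can be demonstrated with libz, one of the streaming engines listed below, through Python's zlib binding: the data is compressed in 64-byte chunks and decompressed in 37-byte chunks, and only the byte order matters.

```python
import zlib

data = b"stream of uncompressed bytes " * 200

# feed the engine chunk by chunk, collecting compressed bytes as they come
comp = zlib.compressobj()
compressed = b"".join(comp.compress(data[i:i + 64])
                      for i in range(0, len(data), 64))
compressed += comp.flush()

# decompress with different chunk boundaries than were used when compressing
decomp = zlib.decompressobj()
restored = b"".join(decomp.decompress(compressed[i:i + 37])
                    for i in range(0, len(compressed), 37))
restored += decomp.flush()
assert restored == data
```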

    Such streaming compression is natively provided by libz, libbz2, libxz/lzma and libzstd. So dar stores this compressed data as provided: an unstructured and arbitrarily long sequence of bytes corresponding to the (compressed) data of a given file. Yes, dar compresses file per file: a new file, a new stream.

    Block compression

    For the lzo algorithm (added with the 2.4.0 release), the liblzo2 library does not provide such a streaming interface; only per-buffer compression is available. That is to say, you provide a block of uncompressed data from which you get a variable amount of compressed bytes. To decompress the data you have to provide back this exact block to the decompression engine. If you stuck the blocks together and did not provide them back at their original boundaries, decompression would fail.

    For that reason dar uses a structure inside the compressed file's data to record where each block starts and ends; this way it is possible to feed the decompression engine with coherent compressed data and get back the original clear data.

    The structure used is made of an arbitrarily long sequence of blocks. Each block is constituted of three fields:

    • a Type field, which has two values: either Data or End of file
    • a block size field, which is zero for End-of-file blocks
    • a sequence of compressed bytes whose length is given by the block size field
    +--------+-----------+---------------------+--------+------------+-----------------+-----+--------+----------------------+
    | Type=D |block size | compressed data ... | Type=D | block size | compressed data | ... | type=E | block size = 0 (EOF) |
    +--------+-----------+---------------------+--------+------------+-----------------+-----+--------+----------------------+
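A sketch of this layout follows, using zlib as a stand-in block compressor; the one-byte type and four-byte size fields are chosen for the illustration, not dar's actual on-disk encoding:

```python
import struct
import zlib

def write_blocks(data: bytes, block_size: int) -> bytes:
    # each block: Type ('D'ata or 'E'nd of file), block size, compressed bytes
    out = b""
    for i in range(0, len(data), block_size):
        compressed = zlib.compress(data[i:i + block_size])
        out += b"D" + struct.pack(">I", len(compressed)) + compressed
    return out + b"E" + struct.pack(">I", 0)   # EOF block, size = 0

def read_blocks(stream: bytes) -> bytes:
    out, pos = b"", 0
    while True:
        btype = stream[pos:pos + 1]
        size = struct.unpack(">I", stream[pos + 1:pos + 5])[0]
        pos += 5
        if btype == b"E":
            return out
        # each block is handed back to the engine at its exact boundaries
        out += zlib.decompress(stream[pos:pos + size])
        pos += size

payload = b"file data to be stored per compressed block " * 100
assert read_blocks(write_blocks(payload, 1024)) == payload
```

Recording the size of each compressed block is what allows the reader to hand every block back to the decompression engine at its original boundaries.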

    One thing you cannot guess from a given piece of compressed data is how much uncompressed data it will generate, yet you have to allocate a memory buffer large enough to receive it. It has been decided to slice the file's data into uncompressed blocks not larger than the maximum liblzo2 can handle, which is 240 kiB. So at decompression time, whatever the size of the compressed block, we can provide a 240 kiB buffer to receive the decompressed data and repeat the process as many times as necessary to process the whole compressed data of a given file.

    With release 2.7.0, when adding the lz4 compression algorithm, it was found that the "streaming API" of the lz4 library was not really a streaming API, as it requires being given back the exact compressed block, not a stream of bytes, for decompression to be possible. For that reason, lz4 streaming is done the exact same way as lzo streaming: block compression with a 240 kiB block size.

    The new block compression feature available starting with release 2.7.0 is thus just an extension of this method, giving the user the ability to select the maximum size of the uncompressed data block that will be compressed at a time. The -z option has been extended to allow one to specify that block size. If set to zero we stick to the streaming mode, else we use block compression.

    But for a given algorithm (except lzo and lz4), streaming mode and block mode are not compatible: the first has no structure (no blocks, and thus cannot be processed in parallel), while the second adds some non-compressed bytes beside the compressed data (the block headers). For that reason the new archive format stores the block size used at creation time in the archive header and trailer, which first lets libdar allocate buffers large enough to receive the decompressed data, and second lets it use the correct mode to retrieve the original data.

    Multi threading

    As mentioned earlier, to provide multi-threading support for compression (added in release 2.7.0) the use of per-block compression is necessary. In the dar archive, this mode of compression provides the same structure of compressed blocks as described above for lzo, but with the additional freedom to choose the block size (which stays fixed at 240 kiB for the lzo and lz4 pseudo-streaming modes) and any compression algorithm supported by libdar.

    But having a per-block structure in the archive is not enough for multi-threaded processing: you also have to set up the code that reads ahead, feeds blocks in parallel to different threads (each having its own compression/decompression engine set up and ready for use), gathers the resulting work and reorders it if necessary before writing it down to the filesystem. The method used here is very similar to what was used to add multi-threading support for encryption/decryption (see the previous chapter).

    Last, while multi-threading is not possible for real streaming compression, it is available in dar/libdar for the lz4 and lzo algorithms, which are implemented as 240 KiB block compression. For the other algorithms, streaming compression uses a different API of the underlying libraries than block compression does, and cannot be parallelized.

    Dar's Documentation - General Presentation
    Dar Documentation

    Dar Presentation

    General Presentation

    Dar is a command-line software aimed at backing up and archiving large live filesystems. It is a filesystem-independent and cross-platform tool. But Dar is not a boot loader, nor is it an operating system. It does not create or format partitions, but it can restore a full filesystem into a larger or smaller partition, from one partition to several ones (or the opposite, from several to one partition), or from one filesystem type to another (ext2/3/4 to reiserFS for example).

    Saves all data and metadata
    • it can save and restore hard-linked inodes of any type (hard-linked plain files, sockets, char/block devices or even hard-linked symlinks (!)),
    • Solaris's Door files,
    • it takes care of Extended Attributes (Linux, MacOS, ...),
    • MacOS X file forks,
    • ACL (Linux, Solaris, ...),
    • it can also detect and restore sparse files, even when the underlying filesystem does not support them, leading to an additional gain in backup space but, mostly, to a disk space optimization at restoration time that guarantees you will always be able to restore your backup on a volume of the same size (which is not true for backup tools that ignore how sparse files are stored on filesystems).
    Suitable for Live filesystem backup
    Thanks to its ability to detect file changes during a backup, it can retry the backup of a particular file. It also has mechanisms that let the user define actions to run before and after saving a given type of file, before or after entering a given directory, and so on. Such an action can be a simple user script or a more complex executable; there is no constraint.
    Embedded Compression
    Of course, the backup can be compressed with a large variety of algorithms (gzip, bzip2, lzma/xz, lzo, zstd, lz4, and more to come), and the compression is done per file, leading to great backup file robustness at the cost of an unnoticeable degradation of the compression ratio. Doing so also lets you tell dar which files to compress and which ones not to try to, saving a lot of CPU cycles.
    Embedded Encryption
    Strong encryption is available with several well-known and reputable algorithms (blowfish, aes, twofish, serpent, camellia..., but also by means of public/private keys, integrating GPG encryption and signing). Securing your data is not only a matter of ciphering algorithm; it is also a matter of protection against code book/dictionary attacks. For that purpose, when encryption is activated, data floats inside the archive at a random position, thanks to two elastic buffers, one added at the beginning of the archive, the other at the end. Last, a KDF function with a salt and a parametrable iteration count increases the strength of the human-provided key, leading the encryption to use a different key even when the human provides the same password/passphrase for two different backups.
    Cloud compatible backup tool
    In addition to embedded encryption, dar can directly use SSH/SFTP or FTP to write and read your backup to/from remote storage (Cloud, NAS, ...), without requiring any local storage. You can also leverage the possibility to split a backup into files of a given size (called slices) to store your backup on removable media (tapes, disks, ...), even low-end Blu-ray, DVD-RW, CD-RW, ... or floppies (!) if you still have them... In that context it may be interesting to also leverage the easy integration of dar with Parchive, not only to detect corruption and prevent restoring a corrupted system unnoticed, but also to repair your backup.
    Many backup flavors available
    Dar can perform full backup1, incremental backup2, differential backup3 and decremental backup4. It also records files that have been removed since the last backup was made, leading the restoration of a system to the exact same state it was in at the time of the differential/incremental/decremental backup (removing files that ought to be removed, adding files that ought to be added and modifying files as expected).
    Binary Delta
    For differential and incremental backups, you can also leverage binary deltas, which lead dar to create a patch for large files when they change, instead of saving them whole even if only a few bytes changed (mailboxes, and so on). A filtering mechanism lets you decide which files can be saved as a patch when they change and which ones will always be saved as a whole.
    Easy automation
    User commands and scripts can be run by dar at each new slice boundary, but also before and after saving specified types of files and directories. It also provides a documented API and a Python binding.
    Good quality software
    Dar was born in 2002, and thanks to its modular source code and highly abstracted data structures, the many features added since then have never required the developer to touch already existing features. Modularity and abstraction are the two pillars of dar/libdar quality.

    Dar is easy to use

    While dar/libdar provides a lot of features we will not mention here, you can use dar without knowing all of them. In its simplest form, dar can be used with only a few options. Here follow some examples of use that should not need additional explanation:

    Backing up all the /usr directory:

    dar --create my_backup --fs-root / --go-into usr
    Restoration (restoring /usr in an alternate directory):

    dar --extract my_backup --fs-root /some/where/else
    Testing backup sanity:

    dar --test my_backup
    Comparing a backup content with the existing filesystem:

    dar --diff my_backup --fs-root /

    Dar is well documented

    A big effort has been made on documentation, but that does not mean you have to read it all to be able to use dar, as it is very easy to use:

    • most needs are covered by the tutorial or mini-howto
    • and for direct explanation of common questions by the FAQ.
    • Then, if you like or if you need, you can also look at the detailed man pages for a particular feature (these man pages are the reference: for each command-line tool you will get very detailed explanations).
    • You may also find some help on the dar-support mailing-list, where a bit more than a hundred subscribed users can help you.

    Dar's documentation is big because it also includes all that may be useful to know how to use libdar, which is intended for developers of external applications relying on this library. For those even more curious, there is also documentation about dar's internals: libdar's structure, the archive format, which can ease the understanding of the magic that makes all this work and gives a better understanding of the dar/libdar code, which is written in C++. But no, you do not need to read all this just to use dar! ;-)

    Here follows an abstracted list of features if you want to know more about dar/libdar from a high-level point of view.

    Known Projects relying on dar or libdar

    Projects in alphabetical order:

    • AVFS is a virtual file system layer for transparently accessing the content of archives and remote directories just like local files.
    • backup.pl script by Bob Rogers, creates and verifies a backup using dump/restore or using dar
    • Baras by Aaron D. Marasco is a rewriting in Perl of SaraB.
    • new in 2022: dar-backup by Per Jensen, to automate and simplify the use of dar with redundancy, remote backup, backup testing after transfer and many other interesting features, like for example the backup definitions and logs management
    • Dar-incdec-repo by Dan A. Muresan is a framework for doing periodic DAR backups with minimal fuss
    • dar_fuse by !evil. dar_fuse provides a faster AVFS equivalent thanks to its direct use of libdar python API and fusepy module.
    • Darbup by Carlo Teubner. One of darbup key features is its ability to automatically delete old archives when the total space taken up by existing archives exceeds some configured maximum
    • Darbrrd by Jared Jennings, to back up a few hundred gigabytes of data onto dozens of optical discs in a way that it can be restored ten years later.
    • DarGUI by Malcolm Poole is a front-end to dar providing simple and graphical access to the main features of dar.
    • Disk archive interface for Emacs by Stefan Reichör
    • gdar by Tobias Specht, a graphical user interface to browse and extract dar archives
    • HUbackup (Home User backup) by SivanGreen
    • kdar is a KDE-3 Graphical User Interface to dar made by Johnathan Burchill
    • Lazy Backup by Daniel Johnson. Lazy Backup is intended to be so easy even lazy people will do their backups
    • A Dar plugin has been made by Guus Jansman for Midnight commander (mc)
    • SaraB: Schedule And Rotate Automatic Backups - by Tristan Rhodes. SaraB works with DAR to schedule and rotate backups. Supports the Towers of Hanoi, Grandfather-Father-Son, or any custom backup rotation strategy.

    If a project you like is missing, you are welcome to contact dar's author for it to be referred here (contact coordinates can be found in the AUTHOR file of the source package).


    1 Full backup: A full backup is a backup of a full filesystem or of a subset of files where, for each file, the archive contains all the inode information (ownership, permission, dates, etc.), the file's data and possibly the file's Extended Attributes.

    2 Differential backup: A differential backup is based on a full backup. It contains only the data and Extended Attributes of files that changed since the full backup was made. It also contains the list of files that have been removed since the full backup was made. For files that did not change, it contains only the inode information. The advantage is that the backup process is much faster, the space required is also much lower. The drawback is that you need to restore the full backup first, then the differential backup to get the last saved state of your system. But if you want the last version of a file that changed recently you only need the last differential backup.

    3 Incremental backup: An incremental backup is essentially the same thing as a differential backup. Some make a difference, I do not. The only point I see is that the incremental backup is not based on a full backup but on a differential backup or on another incremental one.

    4 Decremental backup: A decremental backup is a backup method in which the most recent backup is a full backup, while the older backups are stored as differences against that full backup. The advantage of this type of backup is that you can easily restore your system to its latest state using only the last backup. And if you want to restore it to the state it had some time before, you can restore the last backup (a full backup), then the previous archive (a decremental backup) and so on. As you most usually want to restore the system to its latest available state, this makes restoration much easier compared to incremental backups. However, this suffers from an important drawback: you need to transform the last backup into a decremental backup when the time comes to make another backup. Then you have to remove the former full backup and replace it with its decremental version.

    DAR - Disk ARchive - Authentication page
    Dar Documentation

    DAR's Authentication

    PGP/ GnuPG key

    The GPG key has been renewed over time: the latest is the one to use to exchange data with the author, while the older keys are still necessary to validate the integrity of older releases. Depending on the release time, the following keys have been used:

    From April 2nd, 2002 to September 12th, 2012
    all released packages and communications have been signed with the following PGP public key, having a fingerprint of 3D7F 383C B41E 33D7 0250 A9AC A42E 4223 C818 1A52
    From September 13th, 2012 to January 5th, 2022
    this PGP public key was in use. It has the fingerprint: 3B31 29AF 1DDD EFA5 A37D 818F 0831 B0BD 03D8 B182
    Since January 6th, 2022
    this new key PGP public key is in use. It has the fingerprint: 1BE4 7606 A74F 178C 7328 43B0 5F64 5B19 16D5 6546

    The signature only proves that Denis Corbin has personally released the source or binary packages, which means there is no malicious code inside the signed packages (if you trust him, of course).

am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` DIST_SUBDIRS = $(SUBDIRS) am__DIST_COMMON = $(srcdir)/Makefile.in README DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = @CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CXXSTDFLAGS = @CXXSTDFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DOXYGEN_PROG = @DOXYGEN_PROG@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GETTEXT_MACRO_VERSION = @GETTEXT_MACRO_VERSION@ GMSGFMT = @GMSGFMT@ GMSGFMT_015 = @GMSGFMT_015@ GPGME_CFLAGS = @GPGME_CFLAGS@ GPGME_CONFIG = @GPGME_CONFIG@ GPGME_LIBS = @GPGME_LIBS@ GPGRT_CONFIG = @GPGRT_CONFIG@ GREP = @GREP@ HAS_DOT = @HAS_DOT@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ 
INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ INTLLIBS = @INTLLIBS@ INTL_MACOSX_LIBS = @INTL_MACOSX_LIBS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL_CFLAGS = @LIBCURL_CFLAGS@ LIBCURL_LIBS = @LIBCURL_LIBS@ LIBICONV = @LIBICONV@ LIBINTL = @LIBINTL@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTHREADAR_CFLAGS = @LIBTHREADAR_CFLAGS@ LIBTHREADAR_LIBS = @LIBTHREADAR_LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBICONV = @LTLIBICONV@ LTLIBINTL = @LTLIBINTL@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MSGFMT = @MSGFMT@ MSGMERGE = @MSGMERGE@ MSGMERGE_FOR_MSGFMT_OPTION = @MSGMERGE_FOR_MSGFMT_OPTION@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ POSUB = @POSUB@ PYEXT = @PYEXT@ PYFLAGS = @PYFLAGS@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ UPX_PROG = @UPX_PROG@ USE_NLS = @USE_NLS@ VERSION = @VERSION@ XGETTEXT = @XGETTEXT@ XGETTEXT_015 = @XGETTEXT_015@ XGETTEXT_EXTRA_OPTIONS = @XGETTEXT_EXTRA_OPTIONS@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ 
build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dot = @dot@ doxygen = @doxygen@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ groff = @groff@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ pkgconfigdir = @pkgconfigdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ tmp = @tmp@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ upx = @upx@ SUBDIRS = samples mini-howto man dist_noinst_DATA = COMMAND_LINE Doxyfile portable_cp Benchmark_tools/README Benchmark_tools/always_change Benchmark_tools/bitflip Benchmark_tools/build_test_tree.bash Benchmark_tools/hide_change Benchmark_tools/historization_feature restoration_dependencies.txt dist_pkgdata_DATA = README Features.html Limitations.html Notes.html Tutorial.html Good_Backup_Practice.html FAQ.html api_tutorial.html dar_doc.jpg dar_s_doc.jpg index.html dar-catalog.dtd authentification.html dar_key.txt old_dar_key1.txt old_dar_key2.txt from_sources.html presentation.html usage_notes.html python/libdar_test.py style.css restoration-with-dar.html benchmark.html benchmark_logs.html index_dar.html index_internal.html index_libdar.html @USE_DOXYGEN_TRUE@DOXYGEN = @DOXYGEN_PROG@ all: all-recursive .SUFFIXES: $(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) 
am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu doc/Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs install-dist_pkgdataDATA: $(dist_pkgdata_DATA) @$(NORMAL_INSTALL) @list='$(dist_pkgdata_DATA)'; test -n "$(pkgdatadir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(pkgdatadir)'"; \ $(MKDIR_P) "$(DESTDIR)$(pkgdatadir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(pkgdatadir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(pkgdatadir)" || exit $$?; \ done uninstall-dist_pkgdataDATA: @$(NORMAL_UNINSTALL) @list='$(dist_pkgdata_DATA)'; test -n "$(pkgdatadir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(pkgdatadir)'; $(am__uninstall_files_from_dir) # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. 
# To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. $(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done check-am: all-am check: check-recursive all-am: Makefile $(DATA) all-local installdirs: installdirs-recursive installdirs-am: for dir in "$(DESTDIR)$(pkgdatadir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ 
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-recursive clean-am: clean-generic clean-libtool clean-local mostlyclean-am distclean: distclean-recursive -rm -f Makefile distclean-am: clean-am distclean-generic distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-dist_pkgdataDATA @$(NORMAL_INSTALL) $(MAKE) $(AM_MAKEFLAGS) install-data-hook install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: uninstall-dist_pkgdataDATA @$(NORMAL_INSTALL) $(MAKE) $(AM_MAKEFLAGS) uninstall-hook .MAKE: $(am__recursive_targets) install-am install-data-am \ install-strip uninstall-am .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am all-local \ check check-am clean clean-generic clean-libtool clean-local \ cscopelist-am ctags ctags-am distclean distclean-generic \ distclean-libtool distclean-tags distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-data-hook install-dist_pkgdataDATA \ install-dvi install-dvi-am install-exec install-exec-am \ 
install-html install-html-am install-info install-info-am \ install-man install-pdf install-pdf-am install-ps \ install-ps-am install-strip installcheck installcheck-am \ installdirs installdirs-am maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-generic \ mostlyclean-libtool pdf pdf-am ps ps-am tags tags-am uninstall \ uninstall-am uninstall-dist_pkgdataDATA uninstall-hook .PRECIOUS: Makefile @USE_DOXYGEN_TRUE@all-local: Doxyfile.tmp @USE_DOXYGEN_TRUE@Doxyfile.tmp: @USE_DOXYGEN_TRUE@ sed -e "s%##VERSION##%@PACKAGE_VERSION@%g" -e "s%##HAS_DOT##%@HAS_DOT@%g" -e 's%##SRCDIR##%$(abs_top_srcdir)%g' -e 's%##BUILDDIR##%$(abs_top_builddir)%g' '$(srcdir)/Doxyfile' > Doxyfile.tmp @USE_DOXYGEN_TRUE@ cd '$(top_srcdir)' ; $(DOXYGEN) '$(abs_top_builddir)/doc/Doxyfile.tmp' @USE_DOXYGEN_TRUE@ if [ -d html/search ]; then chmod u+x html/search ; fi @USE_DOXYGEN_TRUE@clean-local: @USE_DOXYGEN_TRUE@ rm -rf html Doxyfile.tmp doxygen_sqlite3.db @USE_DOXYGEN_TRUE@install-data-hook: @USE_DOXYGEN_TRUE@ '$(srcdir)/portable_cp' html $(DESTDIR)$(pkgdatadir) @USE_DOXYGEN_TRUE@ $(INSTALL) -d $(DESTDIR)$(pkgdatadir)/python @USE_DOXYGEN_TRUE@ $(INSTALL) -m 0644 '$(srcdir)/python/libdar_test.py' $(DESTDIR)$(pkgdatadir)/python @USE_DOXYGEN_TRUE@uninstall-hook: @USE_DOXYGEN_TRUE@ rm -rf $(DESTDIR)$(pkgdatadir)/html @USE_DOXYGEN_TRUE@ rm -rf $(DESTDIR)$(pkgdatadir)/python @USE_DOXYGEN_TRUE@ rmdir $(DESTDIR)$(pkgdatadir) || true @USE_DOXYGEN_FALSE@all-local: @USE_DOXYGEN_FALSE@clean-local: @USE_DOXYGEN_FALSE@install-data-hook: @USE_DOXYGEN_FALSE@uninstall-hook: # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: dar-2.7.17/doc/api_tutorial.html0000644000175000017520000032333214403564520013441 00000000000000 Libdar API - Tutorial
    Dar Documentation

    Libdar APplication Interface (API) tutorial

    (API version 6.x.x)

    Presentation

    The Libdar library provides a complete abstraction layer for handling Dar archives. The general operations provided are:

    • archive creation,
    • file extraction from archive,
    • archive listing,
    • archive testing,
    • archive comparison,
    • catalogue isolation,
    • archive merging,
    • archive reparation
    • dar_manager database manipulations
    • dar_slave steering
    • dar_xform operation

    Note that Disk ARchive and libdar have been released under the GNU General Public License (GPL). All code linked to libdar (statically or dynamically) must also be covered by the GPL or a compatible license. Commercial use is prohibited unless otherwise explicitly agreed with libdar's author.

    This tutorial will show you how to use the libdar API. Since release 2.0.0 the dar command-line executable also relies on this API, so looking at its code may provide a good illustration of the way to use libdar; the file src/dar_suite/dar.cpp is the primary consumer of the libdar API. However, we will see here, step by step, how you can leverage the API for your own use.

    In the following, sample code will be provided. It is solely illustrative and is not guaranteed to compile. The reference documentation for this API is contained in the source code as doxygen comments, which can be extracted and "compiled" into the doc/html directory when compiling dar/libdar. This API reference document is also available online and will be referred to below as the API reference documentation.

    Let's Start

    Conventions

    Language

    Dar and libdar are written in C++, and so is the fundamental libdar API. You can however find a Python binding in the src/python subdirectory, which closely follows the class hierarchy that we will see here with C++ sample code. The libdar_test.py document roughly follows in Python what we will see below in C++.

    Feel free to contribute if you want bindings for other languages.

    Header files

    Only one include file is required in your program to have access to libdar:

    #include <dar/libdar.hpp>

    Libdar namespace

    All libdar symbols are defined within the libdar namespace. You can either add the using namespace libdar; statement at the beginning of your source files...

    using namespace libdar; get_version(); ...

    ...or, as shown below, you can explicitly use the namespace in front of libdar symbols. We will use this notation in the following: it has the advantage of avoiding name conflicts and clarifying the origin of the symbols used, but leads to heavier code that is slightly less easy to read:

    libdar::get_version(); ...

    Exceptions

    The library makes use of exceptions to report unexpected conditions. These contain the reason and the context the error occurred in, so you can catch them in your code to display details about the error and its cause. All exceptions used within libdar inherit from the pure virtual class libdar::Egeneric.

    Most of the time you will use only one of the following two methods:

    • std::string & get_message() const
    • std::string & dump_str() const
    get_message()
    returns a message string describing the error met
    dump_str()
    returns a text paragraph with additional information about the stack as well as the context the error occurred in.

    Now, messages are for humans; you may need to provide different behaviors depending on the type of error libdar has met and which triggered the exception. This can be done by checking the type of the exception.

    We will only focus on the most common exception types; read the API reference documentation for an exhaustive list of the exception types used by libdar:

    libdar::Ebug
    This one is used when a-situation-that-should-never-occur is met and can be assumed to be a bug in libdar. Using the get_message() method in that situation would not provide all the necessary details to understand and fix the bug, so it is advised to always use dump_str() for that specific type of exception and abort further execution regarding libdar.
    libdar::Erange
    A parameter or a value is out of range; details are provided by get_message()
    libdar::Ememory
    Libdar lacked virtual memory to proceed with the requested operation
    libdar::Einfinint
    an arithmetic error occurred when using a libdar::infinint object, which is an internal integer type that can handle arbitrarily large numbers. Today it is only used if libdar has been compiled with --enable-mode=infinint, so you are not likely to meet it.
    libdar::Elimitint
    when infinint is not used, a wrapper class over system integers is used to detect integer overflow. In that situation, this exception is thrown, inviting the user to use an infinint-based flavor of libdar to avoid this error
    libdar::Ehardware
    used when the operating system returned an I/O error or a hardware-related error
    libdar::Esystem
    When a software-related error is reported by the system, like lack of ownership, missing permission or a non-existing file...
    libdar::Euser_abort
    This exception is thrown as the consequence of a user request to abort the process. This may occur when answering a question issued by libdar.
    libdar::Ethread_cancel
    used when a program has called the thread cancellation routine of libdar, which drives libdar to stop as soon as possible (immediately when the control point is met, or delayed, aborting the operation under process cleanly, depending on the option used to cancel the running libdar thread).

    Some others exist and are well described in the API reference documentation. You can thus wrap the whole libdar interaction in a statement like the following example:

     try
     {
       // calls to libdar
       ...
     }
     catch(libdar::Ebug & e)
     {
       std::string msg = e.dump_str();
       // do something with msg like for example:
       std::cerr << msg << std::endl;
     }
     catch(libdar::Euser_abort & e)
     {
       // some specific treatment for this
       // type of exception
       ...
     }
     catch(libdar::Egeneric & e)
     {
       // a global treatment for all other libdar exceptions
       std::string msg = e.get_message();
       std::cerr << msg << std::endl;
       ...
     }

    First we must initialize libdar

    The libdar initialization is performed by calling the libdar::get_version() function.

    This function can be called several times though only once is necessary, but this call has to complete before any other call to libdar.

    In a multi-threaded context, libdar initialization is not re-entrant. In other words, the first call to libdar::get_version() must complete before any other call to libdar can take place in another thread of the running process. Once libdar has been initialized, you can call libdar::get_version() concurrently from several threads at the same time without problem.
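    The initialize-once-then-share constraint described above can be sketched without libdar itself. In the following sketch, library_init() is a hypothetical stand-in for libdar::get_version(), guarded by std::call_once so that worker threads can only proceed once the first initialization has completed:

    ```cpp
    #include <atomic>
    #include <cassert>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Hypothetical stand-in for libdar::get_version(): the first call must
    // complete before any other thread calls into the library. std::call_once
    // guarantees that late callers block until the first invocation finishes.
    static std::once_flag init_flag;
    static std::atomic<bool> initialized{false};

    void library_init()
    {
        std::call_once(init_flag, [] { initialized = true; });
    }

    int main()
    {
        library_init();   // complete the initialization in the main thread first

        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i)
            workers.emplace_back([] {
                library_init();        // safe: returns immediately, init is done
                assert(initialized);   // every worker sees an initialized library
            });
        for (auto & t : workers)
            t.join();
        return 0;
    }
    ```

    The same effect is obtained more simply by calling libdar::get_version() in the main thread before spawning any worker thread.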

    libdar::get_version();

    We should prepare the end right now

    Libdar uses some data structures (mutex, secured memory, etc.) that need to be released properly before ending the process. It is important to invoke the close_and_clean() function before exiting your program if you have previously called get_version(). After that, the only allowed call to libdar is get_version().

    libdar::close_and_clean()
    Note:
    close_and_clean() does what is necessary for memory to be released in the proper order. Not calling close_and_clean() at the end of your program may result in an uncaught exception message from libdar at the end of the execution. This depends on the compiler, the libc and the options activated in libdar at compilation time.

    All in one, at the highest level, your code should look like the following:

     libdar::get_version();
     try
     {
       try
       {
         // calls to libdar
         // things we will see next
         ...
       }
       catch(libdar::Ebug & e)
       {
         std::string msg = e.dump_str();
         // do something with msg like for example:
         std::cerr << msg << std::endl;
       }
       catch(libdar::Egeneric & e)
       {
         std::string msg = e.get_message();
         // do something with msg like for example:
         std::cerr << msg << std::endl;
       }
     }
     catch(...)
     {
       libdar::close_and_clean();
       throw;
     }
     libdar::close_and_clean();

    Intercepting signals

    libdar by itself does not make use of any signal (see signal(2) and kill(2)). However, the gpgme library, with which libdar may be linked in order to support asymmetric strong encryption (i.e. encryption using public/private keys), may trigger the PIPE signal. Your application shall thus either ignore it (signal(SIGPIPE, SIG_IGN)) or provide an ad hoc handler. By default the PIPE signal leads the receiving process to terminate.
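    A minimal way to ignore the PIPE signal before calling into libdar, as suggested above; this is plain POSIX signal handling, nothing libdar-specific:

    ```cpp
    #include <cassert>
    #include <csignal>

    int main()
    {
        // Ignore SIGPIPE process-wide, as advised when libdar is linked
        // with gpgme for public/private key encryption.
        void (*previous)(int) = std::signal(SIGPIPE, SIG_IGN);
        assert(previous != SIG_ERR);

        // Raising the signal is now harmless: the process keeps running
        // instead of being terminated (the default disposition).
        std::raise(SIGPIPE);
        return 0;
    }
    ```

    Call this once, early in main(), before any libdar operation that may involve gpgme.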

    Libdar classes

    The main components of libdar are four classes:

    • class libdar::archive to play with dar archives
    • class libdar::database to play with dar_manager databases
    • class libdar::libdar_slave to take the role of dar_slave
    • class libdar::libdar_xform to re-slice existing archives like dar_xform does

    In the following we will first see class libdar::archive, which will take most of our effort, as the other classes, which we will see at the end, are very trivial.

    Multithreaded environment

    Except when explicitly mentioned, a given libdar object can only be manipulated by a single thread. You can however perform several operations concurrently from different threads, each having its own set of libdar objects. Though, if one thread is creating an archive by means of a first object while at the same time another thread, by means of a second object, is trying to read the same archive under construction, things might not work as expected. But these are obvious considerations we will not dig into any further, assuming you know what you are doing.

    Let's create a simple archive

    Creating a libdar::archive object leads, depending on the constructor used, to either:

    • the creation of a brand new archive on filesystem, thus performing a backup (full, incremental, differential, decremental, ...)
    • the opening of an existing archive, for further operations (listing, file restoration, archive testing, archive difference, ...)
    • merging the content of two existing archives into a new one
    • the repairing of an archive whose catalogue is missing or damaged. The catalogue (which means catalog in French) is the table of contents of an archive.

    Basic archive creation

    For archive creation, the constructor has the following signature:

     archive::archive(const std::shared_ptr<user_interaction> & dialog,
                      const path & fs_root,
                      const path & sauv_path,
                      const std::string & filename,
                      const std::string & extension,
                      const archive_options_create & options,
                      statistics * progressive_report);

    For now we will leave aside some parameters that we will see in detail later:

    • dialog can be set to nullptr for now; this means that all interaction with the user will take place by means of standard input, output and error.
    • fs_root is the directory to take as the root of the backup. The libdar::path class can be set up from a std::string, which in turn can be set up from a classical char*
    • sauv_path is the path where the archive will be written; here also a std::string will do the job
    • filename is the slice name to use for the archive we will create.
    • extension is the archive extension to use. There is no reason not to use the string "dar" here
    • options is a class that carries all optional parameters, it contains a constructor without argument setting all options are set to their default values
    • statistics can receive the address of an existing object that another thread can read while a first one is doing a backup operation (a first case of libdar object that can be used by several threads at the same time). We will see this feature later on, but for now let's set this to a null pointer (i.e.: std::nullptr)

    Once the object has been created (i.e. the constructor has returned), the archive operation has completed and a new backup has been completely written to disk.

     libdar::archive my_first_backup(nullptr,
                                     "/home/me",
                                     "/tmp",
                                     "first_backup",
                                     "dar",
                                     libdar::archive_options_create(),
                                     nullptr);

    The previous example has created a single-sliced archive first_backup.1.dar located under /tmp. This backup contains the content of the directory /home/me and its sub-directories, without compression and without ciphering. You have guessed it: compression, slicing and ciphering can be enabled by passing an adhoc archive_options_create object to this archive constructor, something we will see in detail a bit further.

    Once the object my_first_backup has been created there are only a few things we can do with it: archive listing or archive isolation. For archive extraction, testing or diffing, we need to create a new object with a "read" constructor.

    If we had allocated the archive on the heap (using new), we would just add a delete invocation right after the construction of the my_first_backup object:

     libdar::archive* my_first_backup = new libdar::archive(nullptr,
                                                            "/home/me",
                                                            "/tmp",
                                                            "first_backup",
                                                            "dar",
                                                            libdar::archive_options_create(),
                                                            nullptr);

     // we assume std::bad_alloc would be thrown if an allocation
     // problem had occurred

     // same thing if libdar throws an exception at constructor time:
     // the object would not be created and would not have to be deleted

     // so now we can delete the created object:
     delete my_first_backup;

    Progressive report

    During the operation nothing is shown unless an error occurs. To get more visibility on the process we will pass a libdar::statistics object as the last argument of this constructor. Then we can use some interesting methods of class libdar::statistics:

    • std::string get_treated_str()
    • std::string get_hard_links_str()
    • std::string get_skipped_str()
    • std::string get_inode_only_str()
    • std::string get_ignored_str()
    • std::string get_tooold_str()
    • std::string get_errored_str()
    • std::string get_deleted_str()
    • std::string get_ea_treated_str()
    • std::string get_byte_amount_str()
    • std::string get_fsa_treated_str()

    If you have a doubt about the meaning and use of a particular counter in a particular operation, please refer to the API reference documentation for class libdar::statistics: the private fields corresponding to these counters are explicitly described there.

     libdar::statistics stats;

     libdar::archive my_first_backup(nullptr,
                                     "/home/me",
                                     "/tmp",
                                     "first_backup",
                                     "dar",
                                     libdar::archive_options_create(),
                                     &stats);

     // in another thread we can see the progression:
     std::cout << stats.get_treated_str()
               << " file(s) saved" << std::endl;
     std::cout << stats.get_errored_str()
               << " file(s) failed to backup" << std::endl;
     std::cout << stats.get_ea_treated_str()
               << " Extended Attributes saved" << std::endl;

    Archive creation options

    In the previous example, we created a temporary object of class libdar::archive_options_create and passed it on the fly to the archive constructor without modifying it. Thus we used the default options for this operation. But a lot of options are available, and each one can be modified by a specific method of this class. Following is a subset of the available options. We won't detail them all, but you can refer to the doxygen documentation of class libdar::archive_options_create for additional information.

    • void set_reference(std::shared_ptr<libdar::archive> ref_arch)
    • void set_selection(const libdar::mask & selection)
    • void set_subtree(const libdar::mask & subtree)
    • void set_allow_over(bool allow_over)
    • void set_warn_over(bool warn_over)
    • void set_info_details(bool info_details)
    • void set_display_treated(bool display_treated, bool only_dir)
    • void set_display_skipped(bool display_skipped)
    • void set_display_finished(bool display_finished)
    • void set_pause(const libdar::infinint & pause)
    • void set_empty_dir(bool empty_dir)
    • void set_compression(libdar::compression compr_algo)
    • void set_compression_level(libdar::U_Icompression_level)
    • void set_slicing(const libdar::infinint & file_size, const libdar::infinint & first_file_size)
    • void set_ea_mask(const libdar::mask & ea_mask)
    • void set_execute(const std::string & execute)
    • void set_crypto_algo(libdar::crypto_algocrypto)
    • void set_crypto_pass(const libdar::secu_string & pass)
    • void set_compr_mask(const libdar::mask & compr_mask);
    • void set_min_compr_size(const libdar::infinint & min_compr_size)
    • void set_nodump(bool nodump)
    • void set_exclude_by_ea(const std::string & ea_name)
    • void set_what_to_check(libdar::comparison_fields what_to_check)
    • void set_hourshift(const libdar::infinint & hourshift)
    • void set_empty(bool empty)
    • void set_alter_atime(bool alter_atime)
    • void set_furtive_read_mode(bool furtive_read)
    • void set_same_fs(bool same_fs)
    • void set_snapshot(bool snapshot)
    • void set_cache_directory_tagging(bool cache_directory_tagging)
    • void set_fixed_date(const libdar::infinint & fixed_date)
    • void set_slice_permission(const std::string & slice_permission)
    • void set_slice_user_ownership(const std::string & slice_user_ownership)
    • void set_slice_group_ownership(const std::string & slice_group_ownership)
    • void set_retry_on_change(const libdar::infinint & count_max_per_file, const libdar::infinint & global_max_byte_overhead)
    • void set_security_check(bool check)
    • void set_user_comment(const std::string & comment)
    • void set_hash_algo(libdar::hash_algo hash)
    • void set_slice_min_digits(libdar::infinint val)
    • void set_backup_hook(const std::string & execute, const mask & which_files);
    • void set_delta_diff(bool val)
    • void set_delta_signature(bool val)
    • void set_delta_mask(const libdar::mask & delta_mask)

    First, you may have found some new libdar types (i.e. classes) in the arguments; we will briefly explain how to set them:

    std::shared_ptr<libdar::archive>

    A C++11 shared smart-pointer to an existing libdar::archive object. We will see how to use it next, when performing a differential backup.

    libdar::infinint

    It can be set from a classical unsigned int, unsigned long or other unsigned integer type, so for you this is just another unsigned integer type.

    libdar::mask

    This class is the top of a class hierarchy containing several classes provided with libdar. It allows you to define a filtering mechanism for the feature where it is used. We will see how to use it in a further paragraph of this tutorial.

    libdar::compression

    It is an enumeration whose values are (among others not listed here):

    • libdar::compression::gzip
    • libdar::compression::bzip2
    • libdar::compression::xz
    • libdar::compression::lzo

    libdar::U_I

    At the opposite of libdar::infinint, this is not a class but an alias for the system's classical unsigned integer. Depending on the operating system and CPU this might map to unsigned long, unsigned long long or another equivalent type.

    libdar::crypto_algo

    This is an enumeration with values like:

    • libdar::crypto_algo::scrambling
    • libdar::crypto_algo::blowfish
    • libdar::crypto_algo::aes256
    • libdar::crypto_algo::twofish256
    • libdar::crypto_algo::serpent256
    • libdar::crypto_algo::camellia256

    libdar::secu_string

    This is a class used to securely store passwords and sensitive cryptographic information. It can be set up from a char* or, better, from a file descriptor. Its main constructor is:

    • secu_string(const char* ptr, U_I size)

    libdar::comparison_fields

    This is an enumeration with values like:

    • libdar::comparison_fields::all
    • libdar::comparison_fields::ignore_owner
    • libdar::comparison_fields::mtime
    • libdar::comparison_fields::inode_type

    Follows a variant of the previous backup creation example; here we set some options to non-default values:

     libdar::archive_options_create opt;

     opt.set_allow_over(false);       // forbids slice overwriting
     opt.set_display_finished(true);  // show a summary after each completed directory
     opt.set_slicing(1024000, 2000);  // slices of 1000 kiB, with an initial slice of 2000 bytes
     opt.set_pause(2);                // pause every two slices
     opt.set_execute("echo slice %N completed"); // command executed after each slice
     opt.set_crypto_algo(libdar::crypto_algo::aes256);

     // providing an empty secu_string leads dar to
     // interactively ask for the passphrase in a secure manner
     opt.set_crypto_pass(secu_string());
     // but this previous call is useless, as an empty secu_string is the default.
     // One could instead have set up a secu_string from a std::string this way:
     std::string my_pass("hello world!");
     libdar::secu_string my_secupass(my_pass.c_str(), my_pass.size());
     opt.set_crypto_pass(my_secupass);

     opt.set_compression(libdar::compression::xz);
     opt.set_compression_level(6);
     opt.set_min_compr_size(10240);  // do not try compressing files smaller than 10 kiB

     // now that the opt object is ready
     // we can proceed to the archive creation using it:
     libdar::archive my_first_backup(nullptr,
                                     "/home/me",
                                     "/tmp",
                                     "first_backup",
                                     "dar",
                                     opt,
                                     nullptr);

    And of course, you can use both libdar::statistics and libdar::archive_options_create at the same time when creating a backup.
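    For instance (a sketch reusing the hypothetical paths of the previous examples), the progressive report and the non-default options can be combined in a single constructor call:

     libdar::statistics stats;
     libdar::archive_options_create opt;
     opt.set_compression(libdar::compression::xz);
     opt.set_slicing(1024000, 0); // slices of 1000 kiB

     libdar::archive my_backup(nullptr,
                               "/home/me",
                               "/tmp",
                               "first_backup",
                               "dar",
                               opt,      // the non default options
                               &stats);  // progressive report readable from another thread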

    Creating a differential or incremental backup

    Maybe you have guessed? Compared to the previous operation (full backup), doing a differential or incremental backup only requires opening an existing archive in read mode and passing this object to the archive_options_create::set_reference() method seen just above.

    The read-mode constructor for class archive is the following:

    archive::archive(const std::shared_ptr<user_interaction> & dialog,
                     const path & chem,
                     const std::string & basename,
                     const std::string & extension,
                     const archive_options_read & options);

    same as before:

    • dialog can be set to a null pointer; we will see further in this tutorial how to play with the user_interaction class
    • chem is the path leading to the archive to read; it can be provided as a std::string or even a char*
    • basename is the basename of the archive to read
    • extension should be "dar" unless you want to confuse people
    • options can be set to an empty object for default options; we will see this class in more detail with archive listing

      // first we open the previously created archive in read mode:

     std::shared_ptr<libdar::archive> ref_archive;
     ref_archive = std::make_shared<libdar::archive>(nullptr,
                                                     "/tmp",
                                                     "first_backup",
                                                     "dar",
                                                     archive_options_read());

      // for clarity we separated the creation of the archive object
      // used as archive of reference from the archive object whose
      // creation performs the differential backup (see below).
      // We could have done it in one step, invoking std::make_shared
      // directly in the call to set_reference() below, using an
      // anonymous temporary object in place of ref_archive.

     libdar::archive_options_create opt;
     opt.set_reference(ref_archive);

     libdar::archive my_second_backup(nullptr,
                                      "/home/me",
                                      "/tmp",
                                      "diff_backup",
                                      "dar",
                                      opt,
                                      nullptr);

    Creating an incremental backup is done exactly the same way; the difference lies in the nature of the archive of reference. A differential backup is usually described as one that has a full backup as reference, while an incremental backup has another incremental or differential backup as reference (not a full backup).
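    As a sketch (assuming the diff_backup archive created above), making an incremental backup on top of it only changes the archive used as reference:

      // open the previous differential backup in read mode
     std::shared_ptr<libdar::archive> ref;
     ref = std::make_shared<libdar::archive>(nullptr,
                                             "/tmp",
                                             "diff_backup",
                                             "dar",
                                             libdar::archive_options_read());

     libdar::archive_options_create opt;
     opt.set_reference(ref); // a differential/incremental backup as reference

     libdar::archive my_incr_backup(nullptr,
                                    "/home/me",
                                    "/tmp",
                                    "incr_backup",
                                    "dar",
                                    opt,
                                    nullptr);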

    Archive listing

    An archive listing operation consists of creating an archive object in read mode, as we just did above for ref_archive, and invoking a method on that newly created object to see the whole archive content or the content of one of its sub-directories. Before looking at the listing methods, let's zoom in on the class libdar::archive_options_read which we just skipped over previously.

    Archive reading options

    Like the class archive_options_create detailed above, the class archive_options_read has an argument-less constructor that sets the different options to their default values. You can change them one by one by means of specific methods. The most usual ones are:

    • void set_execute(const std::string & execute)
    • void set_info_details(bool info_details)
    • void set_sequential_read(bool val)
    • void set_slice_min_digits(infinint val)

    set_execute() runs a command before reading a new slice of the archive. See the API reference documentation for details. You will meet class archive_options_read again when testing an archive, comparing an archive with the filesystem, isolating an archive and repairing an archive.

    Listing methods

    There are several ways to read a given archive's contents:

    1. making use of a callback function that will be called in turn for each entry of the archive, including special entries that flag the end of a directory (meaning the next entry will be located in the parent directory):

        void op_listing(libdar::archive_listing_callback callback,
                        void *context,
                        const libdar::archive_options_listing & options) const;
    2. using the same callback but only for the entries of a given directory (a directory that has to exist in the archive, of course). It returns false when the end of the directory has been reached:

        bool get_children_of(libdar::archive_listing_callback callback,
                             void *context,
                             const std::string & dir,
                             bool fetch_ea = false);
    3. like the previous one, listing a given directory's content, but returning a vector of libdar::list_entry objects that provide detailed information about each entry; no callback is used here:

        const std::vector<libdar::list_entry> get_children_in_table(const std::string & dir,
                                                                    bool fetch_ea = false) const;

    For the first two methods you have to define a callback function of the following form:

     void (*)(const std::string & the_path,
              const list_entry & entry,
              void *context);

    This callback will receive as arguments the full path of the object, a libdar::list_entry object providing many details about it, and the "context" value passed as argument to archive::op_listing() or archive::get_children_of().

    In the following example we will use only a few methods of class libdar::list_entry that are available to get the details of a given entry of an archive; feel free to explore this class's documentation to get all the details:

      // we first create a read-mode archive object that will be used
      // in the three following examples

     libdar::archive_options_read opt;
     opt.set_info_details(true);
     opt.set_execute("echo 'about to read slice %p/%b.%N.%e with context %c'");

     libdar::archive my_backup(nullptr, // this is the user_interaction we will see further
                               "/tmp",
                               "diff_backup",
                               "dar",
                               opt);
      // we will also illustrate the use of libdar::list_entry
      // inside the callback function we define here:

     void my_listing_callback(const std::string & the_path,
                              const libdar::list_entry & entry,
                              void *context)
     {
        std::cout << the_path;
        if(entry.is_dir())
           std::cout << " is a directory";
        std::cout << " with permission " << entry.get_perm();
        std::cout << " located in slices " << entry.get_slices().display();
        std::cout << std::endl;
        // yep, we do not need context here, it
        // is available if you need it though

        if(entry.is_eod())
        {
           // only op_listing() provides this type of object,
           // which occurs when we have reached the End Of Directory:
           // the next entry will be located in the parent directory.
           //
           // Note for op_listing(): when reading a directory we recurse into it,
           // meaning that the next entry this callback will be
           // invoked for will be located in that directory.
           //
           // get_children_of() performs no recursion and emits no eod
           // object for directories. The entry following a directory is
           // still located in the same parent directory, which, once fully
           // read, stops the get_children_of() routine, while op_listing()
           // parses the whole directory tree.
           //
           // For example, reading an empty directory will provide
           // that directory's info, then an eod object at the next
           // callback invocation.
        }
     }

    The two objects my_backup and my_listing_callback() we just defined will be used in the following examples.

    Archive listing using archive::op_listing()

    First possibility: we can pass nullptr as the callback function to archive::op_listing; everything will then be displayed on stdout.

     my_backup.op_listing(nullptr,  // no callback function
                          nullptr,  // we don't use the context here
                          archive_options_listing()); // and use default listing options

    Second possibility: we use the callback defined previously:

     my_backup.op_listing(my_listing_callback,
                          nullptr,  // we still don't use the context here
                          archive_options_listing()); // and still the default listing options

    In complement to both previous variants, we can of course set non default listing options:

     libdar::archive_options_listing opt;

     opt.set_filter_unsaved(true);  // skip entries that have not been saved since the archive of reference
     opt.set_slice_location(true);  // necessary if we want slicing information available in the callback function
     opt.set_fetch_ea(false);       // this is the default. Set it to true if you
                                    // want to use list_entry::get_ea_reset_read()/get_ea_next_read()

     my_backup.op_listing(my_listing_callback,
                          nullptr, // we still don't care about the context here
                          opt);    // the non default listing options set above

    Archive listing using archive::get_children_of()

      // with this method we only list one directory

     my_backup.get_children_of(my_listing_callback,
                               nullptr, // we still don't care about the context here
                               "",      // we read the root directory of the archive
                               true);   // and ask for EA retrieval, but as we do not
                                        // use list_entry::get_ea_read_next() in the
                                        // callback this is just wasting CPU and memory

      // of course if you have a sub-directory /home/me/.gnupg/private-keys-v1.d
      // in your home directory and you want to check how it is saved in the
      // archive, as we defined the root of the backup as /home/me and as you
      // always have to pass a relative path (no leading /), you could do that by
      // calling the following:

     my_backup.get_children_of(my_listing_callback,
                               nullptr,
                               ".gnupg/private-keys-v1.d");

    Archive listing using archive::get_children_in_table()

      // still listing a single directory, but this time without callback function:

     my_backup.init_catalogue();
      // necessary to read the whole catalogue into memory,
      // in particular if the archive has been opened in sequential read mode

     std::vector<libdar::list_entry> result = my_backup.get_children_in_table(".gnupg/private-keys-v1.d");

      // now reading the std::vector

     std::vector<libdar::list_entry>::iterator it = result.begin();
     while(it != result.end())
     {
        if(it->is_dir())
           std::cout << " is a directory";
        std::cout << " with permission " << it->get_perm();
        std::cout << " located in slices " << it->get_slices().display();
        std::cout << std::endl;
        ++it; // without this the loop would never end
     }

    Testing an archive

    As seen for the listing operation, we assume an archive object has been created in read mode. Testing the coherence of the corresponding archive files on disk is done by calling the archive::op_test method:

     libdar::statistics op_test(const libdar::archive_options_test & options,
                                libdar::statistics * progressive_report);

    You may have recognized the libdar::statistics type we saw at archive creation. It is present as an argument, and the provided libdar::statistics object can be read during the whole testing operation by another thread. But if you just want to know the result, you'd better use the returned value, as it makes the operation quicker due to the absence of multi-thread management.

      // for the exercise, we will change some default options:

     archive_options_test opt;
     opt.set_info_details(true); // to have a verbose output

     libdar::statistics stats;
     stats = my_backup.op_test(opt,      // the non default options set above
                               nullptr); // we will just use the returned value

     std::cout << stats.get_treated_str() << " file(s) tested" << std::endl;
     std::cout << stats.get_errored_str() << " file(s) with errors" << std::endl;
     std::cout << stats.get_ea_treated_str() << " Extended Attributes tested" << std::endl;

    Comparing an archive

    As simple as previously, but using the archive::op_diff method:

     statistics op_diff(const path & fs_root,
                        const archive_options_diff & options,
                        statistics * progressive_report);

    Besides the type of the options argument, note the fs_root argument, which defines which directory of the filesystem the archive is compared to.

      // for the exercise, we will change some default options:

     archive_options_diff opt;
     opt.set_info_details(true); // to have a verbose output
     opt.set_what_to_check(libdar::comparison_fields::ignore_owner);
      // the option above will consider equal two files that only
      // differ by their user or group ownership;
      // by default any difference is reported as a difference

     (void)my_backup.op_diff("/home/me",
                             opt,      // the non default options set above
                             nullptr); // progressive_report, not used in this example

    Isolating an archive

    Still as simple as previously, but using the archive::op_isolate method:

     void op_isolate(const path & sauv_path,
                     const std::string & filename,
                     const std::string & extension,
                     const archive_options_isolate & options);

    You will find similarities with the archive creation, though here this is not a constructor:

    • sauv_path is the directory where to create the isolated version of the current archive
    • filename is the archive basename to create
    • extension should still be "dar" here too
    • options holds the options for the isolation (slicing, compression, encryption, ...), similar to the archive_options_create class we saw at the beginning of this tutorial
      // for the exercise, we will change some default options:

     archive_options_isolate opt;
     opt.set_warn_over(false);
      // by default overwriting is allowed but a warning is issued first;
      // here overwriting will take place without warning

     opt.set_compression(libdar::compression::gzip);
     opt.set_compression_level(9);  // this is the default
     opt.set_min_compr_size(10240); // do not try compressing files smaller than 10 kiB

     my_backup.op_isolate("/tmp",
                          "CAT_diff_backup",
                          "dar",
                          opt); // the non default options set above

      // have you noticed? There is no libdar::statistics returned nor passed as argument.

    Restoring files from an archive

    Quite as simple as previously; here we use the archive::op_extract method:

     statistics op_extract(const path & fs_root,
                           const archive_options_extract & options,
                           statistics *progressive_report);

    • fs_root is the directory under which to restore the files and directory
    • options defines how and what to restore
    • progressive_report has already been seen several times previously

      // as we have not yet seen masks, we will restore all files contained in
      // the backup. Such masks would be provided to the
      // archive_options_extract::set_selection() and/or
      // archive_options_extract::set_subtree() methods
      // to precisely define which files to restore

     archive_options_extract opt;
     opt.set_dirty_behavior(false, false); // dirty files are not restored

     (void)my_backup.op_extract("/home/me/my_home_copy",
                                opt,
                                nullptr); // we have seen previously how to use statistics

    Merging archives

    Here we need two archive objects opened in read mode, and we invoke a specific archive constructor passing these two objects as arguments. Once the constructor has completed, the merging operation is done.

     archive(const std::shared_ptr<user_interaction> & dialog,
             const path & sauv_path,
             std::shared_ptr<archive> ref_arch1,
             const std::string & filename,
             const std::string & extension,
             const archive_options_merge & options,
             statistics * progressive_report);

    • dialog will still be set to a null pointer for now
    • sauv_path is the directory where to create the resulting merging archive
    • ref_arch1 is the first (and mandatory) archive; the second is optional and may be given through the options argument
    • filename is the resulting archive basename
    • extension as always should be set to "dar"
    • options is a set of optional parameters
    • progressive_report is as seen above the ability to have another thread showing progression info during the operation

      // assuming you have two backups:
      // the first is /tmp/home_backup.*.dar
      // the second is /var/tmp/system_backup.*.dar
      // we will create /tmp/merged.*.dar as the result of the
      // merging of these two backups

      // 1 - first things first: opening the first backup

     libdar::archive_options_read opt;
     opt.set_info_details(true);
     opt.set_execute("echo 'about to read slice %p/%b.%N.%e with context %c'");

     std::shared_ptr<libdar::archive> home_backup(new libdar::archive(nullptr, // the user_interaction we'll see further
                                                                      "/tmp",
                                                                      "home_backup",
                                                                      "dar",
                                                                      opt));

      // 2 - opening the second backup

     std::shared_ptr<libdar::archive> system_backup(new libdar::archive(nullptr,
                                                                        "/var/tmp",
                                                                        "system_backup",
                                                                        "dar",
                                                                        opt));

      // 3 - setting up the options for the merging operation

     libdar::archive_options_merge opt_merge;

     opt_merge.set_auxiliary_ref(system_backup);
      // while merging, the second backup is optional,
      // hence it is passed by means of an option
     opt_merge.set_slicing(1048576, 0); // all slices will have 1 MiB at most
     opt_merge.set_compression(libdar::compression::bzip2);
     opt_merge.set_keep_compressed(true);
     opt_merge.set_user_comment("archive resulting from the merging of home_backup and system_backup");
     opt_merge.set_hash_algo(libdar::hash_algo::sha512); // will generate an on-fly hash file for each slice

      // 4 - now performing the merging operation

     libdar::archive merged(nullptr,     // still the user_interaction we will see further
                            "/tmp",
                            home_backup, // the first backup is mandatory, not part of the options
                            "merged",
                            "dar",
                            opt_merge,
                            nullptr);    // progressive_report, not used here

    Decremental backup

    Decremental backup is an operation that, from two full backups, an old and a recent one, creates a backward differential backup: it corresponds to the old full backup, but expressed as a difference against the new full backup. In other words, instead of keeping two full backups, you can keep the latest and replace the oldest by its decremental counterpart. This will save you space while still letting you restore as if you had the old full backup, by first restoring the recent (full) backup and then the decremental one.

    Creating a decremental backup is exactly the same as merging two backups; you just need to call archive_options_merge::set_decremental_mode() before proceeding to the merging. To avoid duplication we will just illustrate the last steps of the previous operation, modified for decremental backup:

      // creating the two read-mode backups as for the merging operation;
      // the only difference is that here both backups are mandatory
     std::shared_ptr<libdar::archive> old_full_backup(...); // not detailing this part
     std::shared_ptr<libdar::archive> new_full_backup(...); // not detailing this part

      // setting up the options for a decremental operation
     libdar::archive_options_merge opt_merge;

     opt_merge.set_decremental_mode(true);
     opt_merge.set_auxiliary_ref(new_full_backup);

      // now performing the merging operation (here a decremental backup)

     libdar::archive merged(nullptr, // still the user_interaction we will see further
                            "/tmp",
                            old_full_backup,
                            "decremental_backup",
                            "dar",
                            opt_merge,
                            nullptr); // progressive_report, not used here

    Archive repairing

    If an archive has been truncated due to a lack of disk space, and if sequential marks (aka tape marks) have not been disabled, it is possible to rebuild a sane archive from the truncated one.

    We just need to invoke a specific libdar::archive constructor, whose form follows:

     archive(const std::shared_ptr<user_interaction> & dialog,
             const path & chem_src,
             const std::string & basename_src,
             const std::string & extension_src,
             const archive_options_read & options_read,
             const path & chem_dst,
             const std::string & basename_dst,
             const std::string & extension_dst,
             const archive_options_repair & options_repair);

    You should now be familiar with the different types and their use. As you can note, this constructor takes charge of reading the damaged archive, because a normal read-mode constructor would fail on a corrupted archive, so you won't have to open it first. As always, this constructor returns only once the operation has completed, that is to say at the end of the repairing.

      // assuming the archive /tmp/home_backup.*.dar is damaged
      // and you want the repaired archive as /tmp/home_backup_repaired.*.dar

     libdar::archive repaired(nullptr, // still the user_interaction we have not yet seen
                              "/tmp",
                              "home_backup",
                              "dar",
                              archive_options_read(),
                              "/tmp",
                              "home_backup_repaired",
                              "dar",
                              archive_options_repair());

      // we have not done fancy things with the two option classes, but we did
      // above enough times for you to get all the necessary information from
      // the API reference documentation

    Looking at some details

    We have covered the different operations the class libdar::archive can be used for; there still remain some concepts to review:

    • user_interaction
    • masks
    • how to cleanly interrupt a running libdar routine
    • how to know which compile-time features have been activated

    Then we will see three other, simpler classes:

    • class database
    • class libdar_slave
    • class libdar_xform

    This will be the subject of the following chapters, but for now, maybe you remember that we had to initialize libdar before use by calling libdar::get_version()? This routine also exists with arguments that, as its name suggests, provide the libdar version:

    void get_version(U_I & major, U_I & medium, U_I & minor, bool init_libgcrypt = true);

    It is advised to use this form to fetch the libdar major, medium and minor version numbers, to be sure the library you've dynamically linked with is compatible with the features you will be using:

    • The major number must be the same, because no compatibility is assured between two libdar versions of different major numbers.
    • While run-time compatibility is assured across medium numbers, the medium number must be greater than or equal to the one used at compilation time, to be sure that all the features you want are available in the libdar library you dynamically link with.
    • Changes between minor versions correspond to bug fixes and do not imply any API change, thus no constraint applies there (just note the presence of more bugs in lower numbers).

    If you use libgcrypt beside libdar in your application, you should initialize libgcrypt yourself and not let libdar do it: the last argument of this form should be set to false in that case. The libgcrypt documentation indicates that libgcrypt should normally (always) be initialized directly from the application, not from an intermediate library.

    Here follows an example of a test that can be performed while initializing libdar:

     U_I major, medium, minor;

     libdar::get_version(major, medium, minor);

     if(major != libdar::LIBDAR_COMPILE_TIME_MAJOR
        || medium < libdar::LIBDAR_COMPILE_TIME_MEDIUM)
     {
         std::cout << "libdar version we link with is too old for this code" << std::endl;
         // throw an exception or anything else appropriate to that condition:
         throw "something";
     }

    Checking compile-time features activation

    Once we have called one of the get_version* functions, it is possible to access the list of features activated at compilation time, thanks to a set of functions located in the compile_time nested namespace inside libdar:

     void my_sample_function()
     {
         bool ea = libdar::compile_time::ea();
         bool largefile = libdar::compile_time::largefile();
         bool nodump = libdar::compile_time::nodump();
         bool special_alloc = libdar::compile_time::special_alloc();
         U_I bits = libdar::compile_time::bits();
         // bits is equal to zero for infinint,
         // else it is equal to 32 or 64 depending on
         // the compilation mode used.

         bool thread = libdar::compile_time::thread_safe();
         bool libz = libdar::compile_time::libz();
         bool libbz2 = libdar::compile_time::libbz2();
         bool liblzo = libdar::compile_time::liblzo();
         bool libxz = libdar::compile_time::libxz();
         bool libcrypto = libdar::compile_time::libgcrypt();
         bool furtive_read = libdar::compile_time::furtive_read();
         // for details see the compile_time namespace in the API reference documentation
     }

    User Interaction

    We have seen std::shared_ptr on class libdar::user_interaction previously, but have not used this feature yet.

    Defining your own user_interaction class

    Class libdar::user_interaction defines the way libdar interacts with the user during an operation, like an archive creation, restoration, testing and so on. Only four types of interaction are used by libdar:

    void message(const std::string & message);
    void pause(const std::string & message);
    std::string get_string(const std::string & message, bool echo);
    secu_string get_secu_string(const std::string & message, bool echo);

    By default an inherited class of libdar::user_interaction, called libdar::shell_interaction, is used; it implements these four types of exchange by means of a text terminal:

    • message() sends the std::string provided by libdar to stdout
    • pause() does the same and asks the user to press either return or escape to answer yes or no
    • get_string() reads a string from stdin
    • get_secu_string() reads a string into a secu_string object from stdin too

    For a GUI you will probably not want stdin and stdout to be used. Instead, you can implement your own class inherited from user_interaction. It should look like the following:

     class my_user_interaction: public libdar::user_interaction
     {
     protected:
         // display of an informational message
         virtual void inherited_message(const std::string & message) override;

         // display of a question; returns the answer from the user as true/false
         virtual bool inherited_pause(const std::string & message) override;

         // display the message and return a string from the user,
         // with or without displaying what the user typed (echo)
         virtual std::string inherited_get_string(const std::string & message, bool echo) override;

         // same as the previous, but the user provided string is returned as a secu_string
         virtual libdar::secu_string inherited_get_secu_string(const std::string & message, bool echo) override;
     };

    Relying on the pre-defined user_interaction_callback class

    As an alternative to defining your own inherited class from libdar::user_interaction, libdar provides a class called user_interaction_callback which is an implementation of the user interaction, based on callback functions.

    You will need to implement four callback functions:

    using message_callback = void (*)(const std::string & x, void *context);
    using pause_callback = bool (*)(const std::string & x, void *context);
    using get_string_callback = std::string (*)(const std::string & x, bool echo, void *context);
    using get_secu_string_callback = secu_string (*)(const std::string & x, bool echo, void *context);

    Then you can create a libdar::user_interaction_callback object using this constructor:

     user_interaction_callback(message_callback x_message_callback,
                               pause_callback x_answer_callback,
                               get_string_callback x_string_callback,
                               get_secu_string_callback x_secu_string_callback,
                               void *context_value);

    Here follows an example of use:

     void my_message_cb(const std::string & x, void *context)
     {
         std::cout << x << std::endl;
     }

     bool my_pause_cb(const std::string & x, void *context)
     {
         char a;

         std::cout << x << std::endl;
         std::cin >> a;
         return a == 'y';
     }

     std::string my_string_cb(const std::string & x, bool echo, void *context)
     {
         // to be defined
     }

     libdar::secu_string my_secu_string_cb(const std::string & x, bool echo, void *context)
     {
         // to be defined
     }

      // eventually using a context_value that will be passed to the callbacks of the object
     void *context_value = (void *)(& some_datastructure);

     std::shared_ptr<libdar::user_interaction> my_user_interaction(new libdar::user_interaction_callback(my_message_cb,
                                                                                                        my_pause_cb,
                                                                                                        my_string_cb,
                                                                                                        my_secu_string_cb,
                                                                                                        context_value));

    You can also find the predefined class libdar::user_interaction_blind, which always answers no in the name of the user, displays nothing and provides empty strings, as well as libdar::shell_interaction_emulator, which, given a user_interaction object, sends to it the formatted information as if it were a shell_interaction object, letting one emulate libdar's default behavior on top of any type of "terminal".

    IMPORTANT
    None of the libdar::user_interaction inherited classes provided by libdar are designed to be manipulated by more than one thread at a time. The use of std::shared_ptr is only here to free the caller from managing such objects and let libdar release them when no more needed, or to let the caller reuse the same user_interaction object for a subsequent call to libdar, which would not be possible if a std::unique_ptr was used instead.

    Now, if you design your own user_interaction inherited class and provide it with mechanisms (mutexes, ...) that allow it to be used simultaneously by several threads, there is no issue passing such an object as argument to different libdar objects used by different threads running at the same time.
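    As a standalone illustration of that approach, here is a minimal sketch of serializing concurrent calls with a mutex. This is not a libdar class: the synchronized_logger type and its message()/count() methods are hypothetical stand-ins for a user_interaction-like object shared between threads.

    ```cpp
    #include <iostream>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    // hypothetical stand-in for a user_interaction-like class:
    // a mutex makes message() safe to call from several threads at once
    class synchronized_logger
    {
    public:
        void message(const std::string & msg)
        {
            std::lock_guard<std::mutex> guard(lock);
            lines.push_back(msg);          // the protected shared state
        }

        std::size_t count() const { return lines.size(); }

    private:
        std::mutex lock;
        std::vector<std::string> lines;
    };

    int main()
    {
        synchronized_logger ui;
        std::vector<std::thread> workers;

        // ten threads each report ten messages through the same object
        for(int t = 0; t < 10; ++t)
            workers.emplace_back([&ui, t]() {
                for(int i = 0; i < 10; ++i)
                    ui.message("thread " + std::to_string(t));
            });

        for(auto & w : workers)
            w.join();

        std::cout << ui.count() << std::endl;   // no message lost
        return 0;
    }
    ```

    Without the lock_guard, the concurrent push_back calls would be a data race; with it, all one hundred messages are recorded.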

    Masks

    Masks are used to define which strings will be considered and which will not. Libdar implements masks as several classes that all inherit from a virtual class named libdar::mask, which defines the way masks are used (an interface class). This class defines the bool mask::is_covered(const std::string & expression) const method, which libdar uses to determine whether a given string matches a mask, and thus whether the corresponding entry (filename, EA, directory path, depending on the context) is eligible or not for an operation.

    Strings applied to masks may correspond to a filename only, to a full path, or maybe to other things (like Extended Attributes). The meaning of the string depends on the context in which the mask is used, something we will see further below.
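    The interface pattern just described can be sketched standalone. This is a simplified model for illustration only, not libdar code: the suffix_mask and negate_mask classes below are hypothetical, and real libdar masks (glob, regex, path matching...) are much richer.

    ```cpp
    #include <iostream>
    #include <string>

    // minimal model of the mask interface: one virtual method deciding
    // whether a given string is covered by the mask
    class mask
    {
    public:
        virtual ~mask() = default;
        virtual bool is_covered(const std::string & expression) const = 0;
    };

    // concrete mask matching strings that end with a given suffix
    class suffix_mask : public mask
    {
    public:
        explicit suffix_mask(std::string s) : suffix(std::move(s)) {}

        bool is_covered(const std::string & expression) const override
        {
            return expression.size() >= suffix.size()
                && expression.compare(expression.size() - suffix.size(),
                                      suffix.size(), suffix) == 0;
        }

    private:
        std::string suffix;
    };

    // negation of another mask, in the spirit of libdar::not_mask
    class negate_mask : public mask
    {
    public:
        explicit negate_mask(const mask & m) : ref(m) {}

        bool is_covered(const std::string & expression) const override
        {
            return ! ref.is_covered(expression);
        }

    private:
        const mask & ref;
    };

    int main()
    {
        suffix_mask tilde("~");       // covers editor backup files like "foo~"
        negate_mask no_tilde(tilde);  // covers everything else

        std::cout << tilde.is_covered("notes.txt~") << std::endl;   // prints 1
        std::cout << no_tilde.is_covered("notes.txt") << std::endl; // prints 1
        std::cout << tilde.is_covered("notes.txt") << std::endl;    // prints 0
        return 0;
    }
    ```

    The point is the composition: because every mask answers the same is_covered() question, masks can wrap other masks, which is exactly how the real not_mask, et_mask and ou_mask classes below combine.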

    There are several basic mask classes you can use to build fairly complex masks, so you should not need to define your own mask classes. If the need arises anyway, please contact the libdar developer if you think an additional class should take place beside the following ones:

    class name behavior
    class libdar::bool_mask boolean mask, either always true or false (depending on the boolean passed at its constructor), it matches either all or none of the submitted strings
    class libdar::simple_mask matches strings as done by the shell on the command lines (see "man 7 glob")
    class libdar::regular_mask matches regular expressions (see "man 7 regex")
    class libdar::not_mask negation of another mask (the mask given at construction time)
    class libdar::et_mask makes an *AND* operator between two or more masks
    class libdar::ou_mask makes the *OR* operator between two or more masks
    class libdar::simple_path_mask matches if the string to evaluate is a subdirectory of, or is the directory itself that has been given at construction time.
    class libdar::same_path_mask matches if the string is exactly the given mask (no wild card expression)
    class libdar::exclude_dir_mask matches if string is the given string or a sub directory of it
    class libdar::mask_list matches a list of files defined in a given file

    Let's play with some masks:

      // all files will be elected by this mask
     libdar::bool_mask m1(true);

      // all strings that match the glob expression "A*~" will match.
      // the second argument of the constructor tells whether the match is case sensitive, so here,
      // any file beginning by 'A' or by 'a' and ending by '~' will be selected by this mask:
     libdar::simple_mask m2(std::string("A*~"), false);

      // m3 is the negation of m2. This mask will thus match
      // any string that does not begin by 'A' or 'a' or does not finish by '~'
     libdar::not_mask m3(m2);

      // this mask matches any string that is a subdirectory of "/home/joe"
      // and any directory that contains /home/joe, meaning
      // "/", "/home", "/home/joe" and any subdirectory are matched.
      // here, the second argument is also case sensitivity (so
      // "/HoMe" will not be selected by this mask as we set it to "true")
     libdar::simple_path_mask m4 = simple_path_mask("/home/joe",
                                                    true);

      // now let's do some more complex things:
      // m5 will now match only strings that are selected by both m2 AND m4
     libdar::et_mask m5;
     m5.add_mask(m2);
     m5.add_mask(m4);

      // we can make more interesting things like this, where m5 will select files
      // that match m2 AND m4 AND m3. But as m3 is not(m2), m5 will never
      // match any file... but the idea here is to see the flexibility of use:
     m5.add_mask(m3);

      // but we could do the same with an "ou_mask" and would get a silly
      // counterpart of m1 (a mask that matches any file)
     libdar::ou_mask m6;
     m6.add_mask(m2);
     m6.add_mask(m4);
     m6.add_mask(m3);

      // lastly, the NOT, AND and OR operations can be used recursively.
      // Frankly, it's even possible to have masks referring to each other!
     libdar::not_mask m7(m6);
     m6.add_mask(m7);

      // now it's up to you to build something that makes sense...

    The idea here is not to create objects manually, but to link their creation to the actions and choices the user makes from the user interface (the Graphical User Interface of your application, for example).

    Now that you've seen the power of these masks, you should know that in libdar masks are used at several places:

    • A first place is to select files against their names (without path information): this is the argument of the set_selection() method of the libdar::archive_options_* classes. The mask here does not apply to directories.

    • A second place is to select files against their path+name; it applies here to all types of files including directories. This is the argument of the set_subtree() method of the libdar::archive_options_* classes. With it, you can prune directories, or in any other way restrict the operation to a particular subdirectory, as well as to a particular plain file for example.

      Important note: concerning this second mask, what your own mask will be compared to by libdar is the absolute path of the file under consideration. If you want to exclude /usr/local/bin from the operation whatever the fs_root value is (which corresponds to the -R option of dar), using a libdar::simple_mask("/usr/local/bin") as argument of libdar::archive_options_*::set_subtree() will do the trick.

    An exception is the archive testing operation, which has no fs_root argument (because the operation is not relative to an existing filesystem); however the subtree argument exists to receive a mask for comparing the paths of files to include in or exclude from the testing operation. In this case the situation is as if fs_root was set to the value "<ROOT>". For example, masks will be compared to "<ROOT>/some/file" when performing an archive test operation.

    Instead of using the explicit string "<ROOT>" you can use the libdar::PSEUDO_ROOT predefined std::string variable.

    • A third place concerns Extended Attributes (EA): this is the argument of the set_ea_mask() method of the libdar::archive_options_* classes. It is applied to the full EA name in the form <domain>.<name> where <domain> is any string value, like but not limited to the usual "user" or "system" domains.
    • A fourth place concerns the files to compress or to avoid compressing. This is the argument of the set_compr_mask() method of the libdar::archive_options_* classes. It works the same as the set_selection() method seen above, based only on the filename without any path consideration.
    • A fifth place concerns files that need to be prepared for backup: this is the argument of the set_backup_hook() method of the libdar::archive_options_create class. It has to be used the same way as set_subtree(). For more about this feature see the backup-hook feature in the dar man page (-<, -> and -= options).

    Aborting an Operation

    If POSIX thread support is available, libdar is built in a thread-safe manner, giving you the possibility to have several threads using libdar at the same time (but on different objects, except concerning libdar::statistics which can be shared between threads). You may then wish to interrupt a given thread. But aborting a thread from the outside (like sending it a KILL signal) will most of the time leave some memory allocated, or even worse can lead to a dead-lock situation, when the killed thread was inside a critical section and did not get the opportunity to release a mutex. For that reason, libdar proposes a set of calls to abort any processing libdar call that is run by a given thread.

      // next is the thread ID in which we want to have libdar calls canceled.
      // here for simplicity we don't describe the way the ID has been obtained,
      // but it could be for example the result of a call to pthread_self() as
      // defined in the <pthread.h> system header file
     pthread_t thread_id = 161720;

      // the most simple call is:
     libdar::cancel_thread(thread_id);

      // this will make any libdar call in this thread be canceled immediately

      // but you can use something a bit more interesting:
     libdar::cancel_thread(thread_id, false);

      // this second argument is true for immediate cancellation and can be omitted in
      // that case. But using false instead leads to a delayed cancellation,
      // in which case libdar aborts the operation
      // but produces something usable, especially if you were performing a backup.

      // You then get a real usable archive which only contains files saved so far, in place
      // of having a broken archive which misses a catalogue at the end. Note that this
      // delayed cancellation needs a bit more time to complete, depending on the
      // size of the archive under process.

    As seen above, cancellation is very simple. Now what happens when you ask for a cancellation? An exception of type Ethread_cancel is thrown. All along its path, memory is released and mutexes are freed. Last, the exception reaches the libdar caller, so you can catch it to define a specific behavior. And if you don't want to use exceptions, a special return code is used.

     try
     {
         libdar::archive my_arch(...);
         ...
     }
     catch(libdar::Ethread_cancel & e)
     {
         ... do something specific when the thread has been canceled;
     }

    Some helper routines are available to know the cancellation status for a particular thread or to abort a cancellation process if it has not yet been engaged.

     pthread_t tid;

      // how to know if the thread tid is under cancellation process?
     if(libdar::cancel_status(tid))
         std::cout << "thread cancellation is under progress for thread : "
                   << tid << std::endl;
     else
         std::cout << "no thread cancellation is under progress for thread : "
                   << tid << std::endl;

      // how to cancel a pending thread cancellation?
     if(libdar::cancel_clear(tid))
         std::cout << "pending thread cancellation has been reset, thread "
                   << tid << " has not been canceled"
                   << std::endl;
     else
         std::cout << "too late, could not avoid thread cancellation for thread "
                   << tid
                   << std::endl;

    Last point, back to the libdar::Ethread_cancel exception: this class has two methods you may find useful when you catch it:

     try
     {
         ... some libdar calls
     }
     catch(libdar::Ethread_cancel & e)
     {
         if(e.immediate_cancel())
             std::cout << "cancel_thread() has been called with \"true\" as second argument"
                       << std::endl;
         else
             std::cout << "cancel_thread() has been called with \"false\" as second argument"
                       << std::endl;

         U64 flag = e.get_flag();
         ... do something with the flag variable ...
     }

      // what is this flag stored in this exception?
      // You must consider that the complete definition of cancel_thread() is the following:
      //     void cancel_thread(pthread_t tid, bool immediate = true, U_64 flag = 0);
      // thus, any argument given in third position is passed to the thrown Ethread_cancel
      // exception, a value which can be retrieved thanks to its get_flag() method. The value
      // given to this flag is not used by libdar itself, it is a facility for user programs
      // to have the possibility to include additional information about the thread cancellation.

      // supposing the thread cancellation has been invoked by:
     libdar::cancel_thread(thread_id, true, 19);
      // then the flag variable in the catch() statement above would have received
      // the value 19.

    Dar_manager API

    For more about dar_manager, please read its man page, where its available features are described in detail.

    To get dar_manager features you need to use the class database. Most of the methods of the database class make use of options; as seen with class archive, an auxiliary class is used to carry these options.

    Database object construction

    Two constructors are available. The first creates a brand-new but empty database in memory:

    database(const std::shared_ptr<user_interaction> & dialog);

    As seen for libdar::archive, dialog can be set to a null pointer if the default interaction mode (stdin/stdout/stderr) suits your needs.

    The second constructor opens an existing database from the filesystem and stores its contents into memory, ready for further use and actions:

     database(const std::shared_ptr<user_interaction> & dialog,
              const std::string & base,
              const database_open_options & opt);

    • dialog can be set to a null pointer or can point to a user_interaction object of your own
    • base is the path and filename of the database to read
    • opt is an object containing a few options. As seen with libdar::archive, we can use a default temporary object to use default options

    Here follow simple examples of use of class database:

     std::shared_ptr<libdar::user_interaction> ui_ptr; // points to null

     libdar::database base(ui_ptr);
      // we have created an empty database (no archive in it) called "base"

     libdar::database other(ui_ptr, // we can reuse it as it points to nullptr
                            "/tmp/existing_base.dmd",
                            libdar::database_open_options());
      // we have created a database object called "other" which contains
      // (in RAM) all information that was contained in the
      // database file "/tmp/existing_base.dmd"

     libdar::database_open_options opt;
     opt.set_partial(true);
     opt.set_warn_order(false);

     libdar::database other2(ui_ptr,
                             "/tmp/existing_base.dmd",
                             opt);

      // we have created yet another database object called "other2" which differs
      // from "other" by the options we used. While "other" is a fully loaded
      // database, "other2" is a partial database. This notion is explained
      // below
    • database_open_options::set_partial(bool value) leads libdar to only load the database header into memory, which is quicker than loading the full database. But some operations we will see below need a fully loaded database; the others can work with both.
    • database_open_options::set_partial_read_only(bool value): in addition to loading only the header, the database is open in read-only mode, which of course forbids any modification to the database but is even faster than a partial read-write database. For just listing a database this is perfectly adapted.
    • database_open_options::set_warn_order(bool value) avoids warnings about ordering problems between archives

    In the following we will indicate whether a database operation can be applied to a partially loaded database or not. All operations can be applied to a fully loaded database.

    Database's methods

    A database can be open in fully loaded read-write mode, in partially loaded (still read-write) mode, and last in partially loaded read-only mode. All operations are available in the first mode, but some are not in the second, and even fewer in the third mode. We will detail which ones are available in each mode:

    Available in partially loaded read-only mode

    • show_contents() : list the archives used to build the database
    • get_options() : list the options that will be passed to dar (as defined with the set_options() method)
    • get_dar_path() : return the path to dar (or empty string if relying on the PATH variable)

    Available in partially loaded read-write mode

    • all methods seen above
    • dump(...) : it is used to write back the database to a file.
    • change_name() : change the basename of the archive whose index is given in argument
    • set_path() : change the path to the archive whose index is given in argument
    • set_options(): change the default options to always pass to dar when performing restoration
    • set_dar_path() : specify the path to dar (use empty string to rely on the PATH variable)

    Available in fully loaded read-write mode

    • all methods seen above
    • add_archive() : add an archive to the database
    • remove_archive() : remove an archive from the database
    • set_permutation() : change archive relative order within the database
    • show_files() : list the files which are present in the given archive
    • show_version() : list the archive where the given file is saved
    • show_most_recent_stats() : compute statistics about the location of most recent file versions
    • restore() : restore the set of files given in argument.

    Well, you might now say that as a description this is a bit light for a tutorial; indeed. In fact these calls are really very simple to use, and you can find a complete description in the API reference documentation. This documentation is built if doxygen is available, and is put under doc/html after calling make in the source package. It is also available from dar's homepage.

    Dar_slave API

    dar_slave's role is to read an archive while interacting with a dar process through a pair of pipes. Dar asks for portions of the archive, or for information about the archive, in the first pipe from dar to dar_slave, and dar_slave sends the requested information into the other pipe toward dar (embedded into an expected format).

    Since API 6.0.x, dar_slave has an API. It is implemented by the class libdar::libdar_slave. You first need to create an object using the following constructor:

    libdar_slave(std::shared_ptr<user_interaction> & dialog,
                 const std::string & folder,
                 const std::string & basename,
                 const std::string & extension,
                 bool input_pipe_is_fd,
                 const std::string & input_pipe,
                 bool output_pipe_is_fd,
                 const std::string & output_pipe,
                 const std::string & execute,
                 const infinint & min_digits);
    • dialog as seen for other libdar classes can be set to a null pointer for interaction on stdin and stdout
    • folder is the directory where resides the archive to read
    • basename is the basename of the archive
    • extension should always be set to "dar"
    • input_pipe_is_fd: if set to true, the next argument is not the path to a named pipe but a number corresponding to a file descriptor open in read mode
    • input_pipe is the path of a named pipe to read from. It can also be an empty string to use stdin as input pipe
    • output_pipe_is_fd: if set to true, the next argument is not the path to a named pipe but a number corresponding to a file descriptor open in write mode
    • output_pipe is the path of the named pipe to write to. It can also be an empty string to use stdout as output pipe

    Once the object is created, you will need to call the libdar_slave::run() method, which will end when the dar process at the other end no longer needs this slave:

     libdar::libdar_slave slave(nullptr,
                                "/tmp",
                                "first_backup",
                                "dar",
                                false,
                                "/tmp/toslave", // assuming this is an existing named pipe
                                false,
                                "/tmp/todar", // assuming this is also an existing named pipe
                                "echo 'reading slice %p/%b.%N.%e in context %c'",
                                0);

     slave.run();

      // once run() has returned, you can launch it again for another process;
      // it will continue to provide access to the /tmp/first_backup.*.dar archive

    Dar_xform API

    dar_xform creates a copy of a given archive, modifying its slicing. It does not require decompressing nor deciphering the archive to do so. There are different constructors depending on whether the archive is read from the filesystem, from a named pipe or from a provided file descriptor.

    Reading from a file

     libdar::libdar_xform(const std::shared_ptr<user_interaction> & ui,
                          const std::string & chem,
                          const std::string & basename,
                          const std::string & extension,
                          const infinint & min_digits,
                          const std::string & execute);
    • ui, as seen so far, can be set to a null pointer for interaction on stdin and stdout
    • chem is the directory where resides the archive to read
    • basename is the basename of the archive
    • extension should always be set to "dar"
    • min_digits is the minimum number of digits the slice number in filenames has been created with (use zero if you don't know what it is)

    Reading from a named pipe

     libdar_xform(const std::shared_ptr<user_interaction> & dialog,
                  const std::string & pipename);
    • dialog as seen for other libdar classes, it can be set to nullptr
    • pipename complete path to the named pipe to read the archive from

    Reading from a file descriptor

     libdar_xform(const std::shared_ptr<user_interaction> & dialog,
                  int filedescriptor);
    • dialog same as above
    • filedescriptor is a file descriptor open in read mode, to read the archive from

    Creating a single or multi-sliced archive on filesystem

    Once the libdar::libdar_xform object is created, it can copy the referred archive to another location in another form thanks to one of the two libdar_xform::xform_to methods. There is no link between the constructor used and the libdar_xform::xform_to flavor used: any combination is possible.

     void xform_to(const std::string & path,
                   const std::string & basename,
                   const std::string & extension,
                   bool allow_over,
                   bool warn_over,
                   const infinint & pause,
                   const infinint & first_slice_size,
                   const infinint & slice_size,
                   const std::string & slice_perm,
                   const std::string & slice_user,
                   const std::string & slice_group,
                   libdar::hash_algo hash,
                   const libdar::infinint & min_digits,
                   const std::string & execute);

    Creating a single sliced archive toward a filedescriptor

     void xform_to(int filedescriptor,
                   const std::string & execute);

    Here follows an example of use. We will convert a possibly multi-sliced archive to a single-sliced one, generating a sha512 hash file on the fly:

     std::shared_ptr<libdar::user_interaction> ui_ptr; // points to null

     libdar::libdar_xform transform(ui_ptr,
                                    "/tmp",
                                    "my_first_archive",
                                    "dar",
                                    0,
                                    "echo 'reading slice %p/%b.%N.%e context is %c'");

     transform.xform_to("/tmp",
                        "my_other_first_archive",
                        "dar",
                        false, // no overwriting allowed
                        true,  // does not matter whether we warn or not as we do not allow overwriting
                        0,     // no pause between slices
                        0,     // no specific first slice size
                        0,     // no slicing at all (previous argument is thus not used anyway in that case)
                        "",    // using default permission for created slices
                        "",    // using default user ownership for created slices
                        "",    // using default group ownership for created slices
                        libdar::hash_algo::sha512, // the hash algo to use (for no hashing use hash_none instead)
                        0,     // min_digits ... not using this feature here, hence "0"
                        "echo 'Slice %p/%b.%N.%e has been written. Context is %c'");

    Compilation & Linking

    Compilation

    All the symbols of the libdar API are defined from <dar/libdar.h>, so you should only need to include this header. If the header file is not located in a standard directory, you may need some extra flags to pass to the compiler in order to compile your code (like -I/opt/...). The pkg-config tool can help here to avoid system dependent invocations:

     shell prompt > cat my_prog.cpp

     #include <dar/libdar.h>

     int main()
     {
         libdar::get_version(...);
         ...
     }

     shell prompt > gcc `pkg-config --cflags libdar` -c my_prog.cpp

    Linking

    Of course, you need to link your program with libdar. This is done by adding -ldar plus the other libraries libdar may rely on, like libz, libbzip2, liblzo or libgcrypt, depending on the features activated at compilation time. Here too, pkg-config can be a great help to avoid system dependent invocations:

    shell prompt > gcc `pkg-config --libs libdar` my_prog.o -o my_prog

    Libdar's different flavors

    The compilation and linking steps described above assume you have a "full" libdar library. But beside the full (alias infinint) libdar flavor, libdar also comes in 32 and 64 bit versions. In these last ones, in place of internally relying on a special type (a C++ class called infinint) to handle arbitrarily large integers, libdar32 relies on 32 bit integers and libdar64 relies on 64 bit integers (there are limitations, which are described in doc/LIMITATIONS). But all these libdar versions (infinint, 32 bits, 64 bits) have the same interface and must be used the same way, except for compilation and linking:

    These different libdar versions can coexist on the same system; they share the same include files. But the LIBDAR_MODE macro must be set to 32 or 64 when compiling or linking with libdar32 or libdar64 respectively; this macro changes the way the libdar header files are interpreted by the compiler. pkg-config --cflags will set the correct LIBDAR_MODE, so you should only bother calling it with either libdar, libdar32 or libdar64 depending on your need: "pkg-config --cflags libdar64" for example.

    shell prompt > cat my_prog.cpp

    #include <dar/libdar.h>

    int main()
    {
        libdar::get_version(...);
        ...
    }

    shell prompt > gcc -c `pkg-config --cflags libdar64` my_prog.cpp
    shell prompt > gcc `pkg-config --libs libdar64` my_prog.o -o my_prog

    and replace 64 by 32 to link with libdar32.

    Dar Documentation

    DAR - Frequently Asked Questions

    Questions:

    Answers:

    I restore/save all files but dar reported some files have been ignored, what are those ignored files?

    When restoring/saving, all files are considered by default. But if you specify some files to restore or save, all other files are "ignored"; this is the case when using the -P, -X, -I, -g, -[ or -] options.

    Dar hangs when using it with pipes, why?

    Dar can produce backups on its standard output if you give '-' as basename. But it cannot read a backup from its standard input in direct access mode. To feed a backup to dar through pipes, you either need dar_slave and two pipes, or have to use sequential reading (--sequential-read option, which makes restoration of a few files slow compared to the (default) direct access mode). To use dar with dar_slave over pipes in direct access mode (which is the more efficient way to proceed), see the detailed notes, or more precisely the dar and ssh note.

    Why, when I restore 1 file, does dar report that 3 files have been restored?

    If you restore, for example, the file usr/bin/emacs, dar will first restore usr (if the directory already exists, its date and ownership are restored; existing files in that directory stay untouched), then usr/bin, and last usr/bin/emacs. Thus 3 inodes have been restored or modified while only one file was asked for restoration.

    While compiling dar I get the following message: g++: /lib/libattr.a: No such file or directory, what can I do?

    The problem comes from an inconsistency in your distro (Redhat and Slackware seem(ed) concerned at least): dar (libtool) finds the /usr/lib/gcc-lib/i386-redhat-linux/3.3.3/../../../libattr.la file to link with. This file defines where the static and dynamic libattr libraries are located, and it expects both under /lib. While the dynamic libattr is there, the static version has been moved to /usr/lib. A workaround is to make a symbolic link:

    ln -s /usr/lib/libattr.a /lib/libattr.a
    I cannot find the binary package for my distro, where to look for?

    For any binary package, ask your distro maintainer to include dar (if not already done), and check on the web site of your preferred distro for a dar package

    Can I use different filters between a full backup and a differential backup? Would not dar consider some file not included in the filter to be deleted?

    Yes, you can. No, there is no risk to have dar deleting the files that were not selected for the differential backup. Here is the way dar works:

    During a backup process, when a file is ignored due to filter exclusion, an "ignored" entry is added to the catalogue. At the end of the backup, dar compares both catalogues, the one of reference and the new one built during the backup process, and adds a "detruit" entry (which means "destroyed" in French) when an entry of the reference is not present in the new catalogue. Thus, if an "ignored" entry is present, no "detruit" will be added for that name. Then all "ignored" entries are removed and the catalogue is written at the end of the backup.
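    This comparison can be sketched in Python; this is an illustrative model only (dar's real catalogue is a C++ structure with many more entry states), representing each catalogue as a hypothetical name-to-status map:

```python
def build_detruit_list(reference, current):
    """Compare two catalogues (name -> status maps) and return the
    names to record as 'detruit' (deleted since the reference backup)."""
    detruit = []
    for name in reference:
        status = current.get(name)
        if status is None:
            # present in the reference but absent now: file was deleted
            detruit.append(name)
        # if status == "ignored", the file was excluded by a filter,
        # not deleted, so no 'detruit' entry is added for it
    return detruit

ref = {"a.txt": "saved", "b.txt": "saved", "c.txt": "saved"}
cur = {"a.txt": "saved", "b.txt": "ignored"}   # c.txt really deleted
print(build_detruit_list(ref, cur))  # ['c.txt']
```

    Note that b.txt, merely excluded by a filter, produces no "detruit" entry, exactly as described above.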

    Once in action, dar makes all the system slower and slower, then it stops with the message "killed"! How to overcome this problem?

    Dar needs virtual memory to work. Virtual memory is RAM + swap space. Dar's memory requirement grows with the number of files saved, not with the amount of data saved. If you have a few huge files, you are unlikely to hit any memory limitation. At the opposite, saving a plethora of files (either big or small) will make dar request an increasing amount of virtual memory. Dar needs this memory to build the catalogue (the table of contents) of the backup it creates. The same holds for differential backups, except that dar also needs to load in memory the catalogue of the backup of reference, which most of the time makes a differential backup use about twice as much memory as a full backup.

    Anyway, the solution is:

    1. Read the limitations file to understand the problem and be aware of the limitations you will bring at step 3, below.
    2. If you can, add swap space to your system (under Linux, you can either add a swap partition or a swap file, which is less constraining but also a bit less efficient). Bob Barry provided a script that can give you a raw estimation of the required virtual memory (doc/samples/dar_rqck.bash), it was working well with dar 2.2.x but since then and the newly added features, the amount of metadata per file is variable: The memory requirement per file also depends on the presence and amount of Extended Attributes and Filesystem specific attributes, which changes from file to file.
    3. If this is not enough, or if you don't want/cannot add swap space, recompile dar giving --enable-mode=64 argument to the configure script. Note that since release 2.6.x this is the default compilation mode, thus you should be good now.
    4. If this is not enough, and you have some money, you can add some RAM to your system
    5. If all that fails, ask for support on the dar-support mailing-list.

    Last, there is always the workaround of making several smaller backups of the files to save. For example, one backup for all that is in /usr/local, another one for all that is in /var, and so on. These backups can be full or differential. The drawback is small, as you can store these backups side by side and use them at will. Moreover, you can feed a unique dar_manager database with all these different backups, which hides the fact that there are several full and several differential backups concerning different sets of files.

    I have a backup, how can I change the size of its slices?

    dar_xform is your friend!

    dar_xform -s <size> original_backup new_backup

    dar_xform will create a new backup with slices of the requested size (you can also make use of the -S option for the first slice). Note that dar_xform neither decrypts nor uncompresses the backup, this is thus a very fast processing. See the dar_xform man page for more.
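    What dar_xform does can be pictured as re-cutting a byte stream into chunks of a new size, never interpreting the bytes themselves. A rough Python illustration (a hypothetical helper, not dar code):

```python
def reslice(data: bytes, slice_size: int) -> list[bytes]:
    """Cut a byte stream into slices of at most slice_size bytes,
    without looking at (decrypting or uncompressing) the content."""
    return [data[i:i + slice_size] for i in range(0, len(data), slice_size)]

backup = b"0123456789" * 3          # 30 bytes standing in for a backup
slices = reslice(backup, 8)
print([len(s) for s in slices])     # [8, 8, 8, 6]
print(b"".join(slices) == backup)   # True: the content is unchanged
```

    Because no per-byte processing is involved, the cost is pure I/O, which is why the real dar_xform is so fast.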

    I have a backup in one slice, how can I split it in several slices?

    dar_xform is your friend!

    see just above.

    I have a backup in several slices, how can I concatenate them all into a single file?

    dar_xform is your friend!

    dar_xform original_backup new_backup

    dar_xform without -s option creates a single sliced backup. See dar_xform man page for more.

    I have a backup, how can I change its encryption scheme?

    The merging feature lets you do that. Merging has two roles: putting into one backup the contents of two different backups, and at the same time filtering out files you decided not to include in the resulting backup. The merging feature can take two, but also only one, backup as input. This is what we will use here, without any filter, to keep all saved files:

    • a single input (our original backup)
    • no file filtering (so we keep all the files)
    • keeping files compressed (no decompression/re compression) to speed up the process (-ak option)
    dar -+ new_backup -A original_backup -K "<new_algo>:new pass" -ak

    If you don't want the password to appear in clear on the command line (where it can be seen for example with top or ps by other users), simply provide "<new_algo>:"; dar will then ask you for the password on the fly. If using blowfish you can just provide ":" for the keys. Note that before release 2.5.0, the -J option was needed to provide the password of the source backup. Since then, without -J, dar asks interactively for the password of the backup to read. You can still use -J to provide the password from a DCF file and this way avoid dar asking for it interactively.

    Note that you can also change slicing of the backup at the same time thanks to -s and -S options:

    dar -+ new_backup -A original_backup -K ":" -ak -s 1G
    I have a backup, how can I change its compression algorithm?

    Same thing as above: we will use the merging feature:

    to use bzip2 compression:

    dar -+ new_backup -A original_backup -zbzip2

    to use gzip compression

    dar -+ new_backup -A original_backup -zgzip

    to use lzo compression, use -zlzo, for LZ4 use -zlz4, for zstd use -zzstd and so on.

    To use no compression at all, do not add any -z option, or exclude all files from compression (-Z "*"):

    dar -+ new_backup -A original_backup

    Note that you can also change encryption scheme and slicing at the same time you change compression:

    dar -+ new_backup -A original_backup -zbzip2 -K ":" -J ":" -s 1G
    Which options can I use with which options?

    DAR provides eight commands:

    -c to create a new backup
    -x to extract files from a given backup
    -l to list the contents of a given backup
    -d to compare the contents of a backup with filesystem
    -t to test the internal coherence of a given backup
    -C to isolate a backup (extract its contents to a usually small file) or make a snapshot of the current filesystem
    -+ to merge two backups in one or create a sub backup from one or two other ones
    -y to repair a backup

    For each command listed above, here follows the available options (those marked OK):

    short option long option -c -x -l -d -t -C -+ -y
    -v --verbose OK OK OK OK OK OK OK OK
    -vs --verbose=s OK OK -- OK OK -- OK OK
    -b --beep OK OK OK OK OK OK OK OK
    -n --no-overwrite OK OK -- -- -- OK OK OK
    -w --no-warn OK OK -- -- -- OK OK OK
    -wa --no-warn=all -- OK -- -- -- -- -- --
    -A --ref OK OK -- OK OK OK OK OK
    -R --fs-root OK OK -- OK -- -- -- --
    -X --exclude OK OK OK OK OK -- OK --
    -I --include OK OK OK OK OK -- OK --
    -P --prune OK OK OK OK OK -- OK --
    -g --go-into OK OK OK OK OK -- OK --
    -] --exclude-from-file OK OK OK OK OK -- OK --
    -[ --include-from-file OK OK OK OK OK -- OK --
    -u --exclude-ea OK OK -- -- -- -- OK --
    -U --include-ea OK OK -- -- -- -- OK --
    -i --input OK OK OK OK OK OK OK --
    -o --output OK OK OK OK OK OK OK --
    -O --comparison-field OK OK -- OK -- -- -- --
    -H --hour OK OK -- -- -- -- -- --
    -E --execute OK OK OK OK OK OK OK OK
    -F --ref-execute OK -- -- -- -- OK OK OK
    -K --key OK OK OK OK OK OK OK OK
    -J --ref-key OK -- -- -- -- OK OK OK
    -# --crypto-block OK OK OK OK OK OK OK OK
    -* --ref-crypto-block OK -- -- -- -- OK OK OK
    -B --batch OK OK OK OK OK OK OK OK
    -N --noconf OK OK OK OK OK OK OK OK
    -e --empty OK -- -- -- -- OK OK OK
    -aSI --alter=SI OK OK OK OK OK OK OK OK
    -abinary --alter=binary OK OK OK OK OK OK OK OK
    -Q OK OK OK OK OK OK OK OK
    -aa --alter=atime OK -- -- OK -- -- -- --
    -ac --alter=ctime OK -- -- OK -- -- -- --
    -am --alter=mask OK OK OK OK OK OK OK --
    -an --alter=no-case OK OK OK OK OK OK OK --
    -acase --alter=case OK OK OK OK OK OK OK --
    -ar --alter=regex OK OK OK OK OK OK OK --
    -ag --alter=glob OK OK OK OK OK OK OK --
    -z --compression OK -- -- -- -- OK OK --
    -s --slice OK -- -- -- -- OK OK OK
    -S --first-slice OK -- -- -- -- OK OK OK
    -p --pause OK -- -- -- -- OK OK OK
    -@ --aux OK -- -- -- -- -- OK --
    -$ --aux-key -- -- -- -- -- -- OK --
    -~ --aux-execute -- -- -- -- -- -- OK --
    -% --aux-crypto-block -- -- -- -- -- -- OK --
    -D --empty-dir OK OK -- -- -- -- OK --
    -Z --exclude-compression OK -- -- -- -- -- OK --
    -Y --include-compression OK -- -- -- -- -- OK --
    -m --mincompr OK -- -- -- -- -- OK --
    -ak --alter=keep-compressed -- -- -- -- -- -- OK --
    -af --alter=fixed-date OK -- -- -- -- -- -- --
    --nodump OK -- -- -- -- -- -- --
    -M --no-mount-points OK -- -- -- -- -- -- --
    -, --cache-directory-tagging OK -- -- -- -- -- -- --
    -k --deleted -- OK -- -- -- -- -- --
    -r --recent -- OK -- -- -- -- -- --
    -f --flat -- OK -- -- -- -- -- --
    -ae --alter=erase_ea -- OK -- -- -- -- -- --
    -T --list-format -- -- OK -- -- -- -- --
    -as --alter=saved -- -- OK -- -- -- -- --
    -ad --alter=decremental -- -- -- -- -- -- OK --
    -q --quiet OK OK OK OK OK OK OK OK
    -/ --overwriting-policy -- OK -- -- -- -- OK --
    -< --backup-hook-include OK -- -- -- -- -- -- --
    -> --backup-hook-exclude OK -- -- -- -- -- -- --
    -= --backup-hook-execute OK -- -- -- -- -- -- --
    -ai --alter=ignore-unknown-inode-type OK -- -- -- -- -- -- --
    -at --alter=tape-marks OK -- -- -- -- -- OK --
    -0 --sequential-read OK OK OK OK OK OK -- --
    -; --min-digits OK OK OK OK OK OK OK OK
    -1 --sparse-file-min-size OK -- -- -- -- -- OK --
    -ah --alter=hole-recheck -- -- -- -- -- -- OK --
    -^ --slice-mode OK -- -- -- -- OK OK OK
    -_ --retry-on-change OK -- -- -- -- -- -- --
    -asecu --alter=secu OK -- -- -- -- -- -- --
    -. --user-comment OK -- -- -- -- OK OK --
    -3 --hash OK -- -- -- -- OK OK OK
    -2 --dirty-behavior -- OK -- -- -- -- -- --
    -al --alter=lax -- OK -- -- -- -- -- OK
    -alist-ea --alter=list-ea -- -- OK -- -- -- -- --
    -4 --fsa-scope OK OK -- OK -- -- OK --
    -5 --exclude-by-ra OK -- -- -- -- -- -- --
    -7 --sign OK -- -- -- -- OK OK OK
    -' --modified-data-detection OK -- -- -- -- -- -- --
    -{ --include-delta-sig OK -- OK -- -- OK -- --
    -} --exclude-delta-sig OK -- OK -- -- OK -- --
    -8 --delta OK -- OK -- -- OK -- --
    -6 --delta-sig-min-size OK -- OK -- -- OK -- --
    -az --alter=zeroing-negative-dates OK -- -- -- -- -- -- --
    -\ --ignored-as-symlink OK -- -- -- -- -- -- --
    -T --kdf-param OK -- OK -- -- OK -- --
    --aduc --alter=duc OK OK OK OK OK OK OK OK
    -G --multi-thread OK OK OK OK OK OK OK OK
    -j --network-retry-delay OK OK OK OK OK OK OK --
    -afile-auth --alter=file-authentication OK OK OK OK OK OK OK --
    -ab --alter=blind-to-signatures OK OK OK OK OK OK OK --
    -aheader --alter=header -- -- OK -- -- -- -- --

    Why dar reports corruption of the backup I have transfered with FTP?

    Dar backups are binary files, they must be transfered in binary mode when using FTP. This is done in the following way for the ftp command-line client :

    ftp <somewhere>
    <login>
    <password>
    bin
    put <file>
    get <file>
    bye

    If you transfer a backup (or any other binary file) in ascii mode (the opposite of binary mode), the 8th bit of each byte will be lost and the backup will become impossible to recover (due to the destruction of this information). Be very careful to test your backup after transferring it back to your host, to be sure you can delete the original file.
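    The irreversibility is easy to see: keeping only the low 7 bits of each byte is a many-to-one mapping, so the original bytes cannot be reconstructed. A simplified Python model of what an ascii-mode transfer may do to the data:

```python
def ascii_mode_damage(data: bytes) -> bytes:
    # an ascii-mode transfer may keep only the low 7 bits of each byte
    return bytes(b & 0x7F for b in data)

original = bytes([0x12, 0x85, 0xFF, 0x40])
damaged = ascii_mode_damage(original)
print(damaged == original)   # False: 0x85 -> 0x05, 0xFF -> 0x7F

# two different inputs map to the same damaged output, so no tool can
# ever recover the original bytes from the damaged file:
print(ascii_mode_damage(bytes([0x05])) == ascii_mode_damage(bytes([0x85])))  # True
```

    Pure 7-bit text survives unchanged, which is why the problem only shows up with binary files such as dar backups.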

    Why DAR does save UID/GID instead of plain usernames and usergroups?

    A file's properties do not contain the name of the owner nor the name of the owning group; instead they contain two numbers, the user ID and the group ID (UID and GID in short). The /etc/passwd file associates these numbers with names and some other properties (like the login shell, the home directory, the password; see also /etc/shadow). Thus, when you list a directory (with the 'ls' command for example, or with any GUI program), the listing application opens each directory, where it finds a list of names with an associated inode number, then fetches the inode attributes of each file and reads, among other information, the UID and the GID. To display the real user name and group name, the listing application uses a well-defined standard C library call that does the lookup in /etc/passwd, possibly in NIS if configured, and in any other additional system [this way applications do not have to bother with the many possible system configurations, the same API is used whatever the system is]. The lookup returns the name if it exists, and the listing application displays, for each file found in a directory, the attributes and the user and group names as returned by the system, instead of the UID and GID.

    As you can see, the user name and group name are not part of any file attribute, but UID and GID *are*. Dar is mainly a backup tool: it preserves as much as possible of the file properties to be able to restore them as close as possible to their original state. Thus a file saved with UID=3 will be restored with UID=3. The name corresponding to UID 3 may exist or not, may exist and be the same, or may exist and be different; the file will anyway be restored with UID 3.

    Scenario with dar's way of restoring

    Thus, when doing backup and restoration of a crashed system, you can be confident that the restoration will not interfere with the bootable system you used to launch dar to restore your disk. Assume UID 1 is labeled 'bin' in your real crashed system, but labeled 'admin' in the boot system, while UID 2 is labeled 'bin' in this boot system: files owned by bin in the system to restore will be restored under UID 1, not UID 2 which is used by the temporary boot system. At that time, still running from the boot system, a 'ls' right after restoration will show the original files owned by 'bin' as now owned by user 'admin'.

    This is really a mirage: your restoration also restores the /etc/passwd file and other system configuration files (like NIS configuration files if they were used). At reboot time, on the newly restored real system, UID 1 will again be associated with user 'bin' as expected, and files originally owned by user bin will be listed as owned by bin as expected.

    Scenario with plain name way of restoring

    If dar had done otherwise, restoring the files owned by 'bin' to the UID currently corresponding to 'bin', these files would have been given UID 2 (the one used by the temporary bootable system used to launch dar). But once the real restored system was launched, this UID 2 would belong to some other user, not 'bin', which is mapped to UID 1 in the restored /etc/passwd.

    Now, if you want to change some UID/GID when moving a set of files from one live system to another, there is no problem as long as you are not running dar under the 'root' account. Accounts other than 'root' are usually not allowed to modify UID/GID, thus files restored by dar will get the user and group ownership of the dar process, i.e. of the user who launched dar.

    But if you really need to move a directory tree containing files with different ownership, and you want to preserve these ownerships from one live system to another while the corresponding UID/GID do not match between the two systems, dar can still help you:

    • Save your directory tree on the source live system
    • From the root account in the destination live system do the following:
    • restore the backup content in an empty directory
    • change the UID/GID of files according to those used by the destination filesystem with the commands:
      find /path/to/restored/backup -uid <old UID> -print -exec chown <new name> {} \;
      find /path/to/restored/backup -gid <old GID> -print -exec chgrp <new name> {} \;
      The first command remaps a UID to another for all files under the /path/to/restored/backup directory;
      the second command does the same for a GID.
    Example on how to globally modify ownership of a directory tree user by user

    For example, you have on the source system three users: Pierre (UID 100), Paul (UID 101), Jacques (UID 102) but on the destination system, these same users are mapped to different UID: Pierre has UID 101, Paul has UID 102 and Jacques has UID 100.

    We temporarily need an unused UID on the destination system; we will assume UID 680 is not used. Then, after restoring the backup in the directory /tmp/A, we will do the following:

    find /tmp/A -uid 100 -print -exec chown 680 {} \;
    find /tmp/A -uid 102 -print -exec chown jacques {} \;
    find /tmp/A -uid 101 -print -exec chown paul {} \;
    find /tmp/A -uid 680 -print -exec chown pierre {} \;

    which is:

    • change files of UID 100 to UID 680 (the files of Pierre are now under the temporary UID 680 and UID 100 is freed)
    • change files of UID 102 to UID 100 (the files of Jacques get their UID of the destination live system, UID 102 is now freed)
    • change files of UID 101 to UID 102 (the files of Paul get their UID of the destination live system, UID 101 is now freed)
    • change files of UID 680 to UID 101 (the files of Pierre, which had been temporarily moved to UID 680, are now set to their UID on the destination live system; UID 680 is no longer used).
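    The chown passes implement a cyclic permutation of UIDs using one free UID as temporary parking. The same logic, with the UIDs of the example above (source: pierre=100, paul=101, jacques=102; destination: pierre=101, paul=102, jacques=100), sketched in illustrative Python:

```python
def chown_all(files, old_uid, new_uid):
    """Return a copy of `files` (path -> uid) with old_uid replaced by
    new_uid, like `find -uid <old> -exec chown <new> {} \\;` would do."""
    return {p: (new_uid if u == old_uid else u) for p, u in files.items()}

files = {"a": 100, "b": 101, "c": 102}   # restored files carry source UIDs

files = chown_all(files, 100, 680)  # park pierre's files on the free UID
files = chown_all(files, 102, 100)  # jacques -> destination UID 100
files = chown_all(files, 101, 102)  # paul    -> destination UID 102
files = chown_all(files, 680, 101)  # pierre  -> destination UID 101

print(files)  # {'a': 101, 'b': 102, 'c': 100}
```

    The ordering matters: each pass frees the UID that the next pass needs as a target, so no two users' files are ever merged under the same UID.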

    You can then move the modified files to their appropriate destination, or make a new dar backup to be restored in the appropriate place if you want to use some of dar's features, for example restoring only files that are more recent than those present on the filesystem.

    Dar_Manager does not accept encrypted backups, how to workaround this?

    Yes, that's true, dar_manager does not accept encrypted backups. The first reason is that, as a dar_manager database cannot be encrypted, it would not be very sound to add encrypted backups to it. The second reason is that the dar_manager database would have to hold the key of each encrypted backup, making the database the weakest point of your data security: breaking the database encryption would provide access to every encryption key and, with access to the original backups, to the data of any backup added to the database.

    To workaround this, you can proceed as follows:

    • isolate your encrypted backup into an unencrypted 'isolated catalogue': do not use the -K option while isolating. Without the -J option, dar will prompt for the password of the encrypted backup. For automated processes, you are encouraged to use a DCF file with restricted permissions containing the '-J <key>' option to be passed to dar, then instruct dar to read that file thanks to the -B option.
    • add this isolated catalogue to the dar_manager database of your choice,
    • change the name and path of the added catalogue to point to your real encrypted backup (-b and -p options of dar_manager).

    Note that as the database is not encrypted, this will expose the file listing (not the files' contents) of your encrypted backups to anyone able to read the database; it is thus recommended to set restrictive permissions on this database file.

    When the time comes to use dar_manager to restore some file, you will have to make dar_manager pass the key to dar, for it to be able to restore the needed files from the backup. This can be done in several ways: on dar_manager's command-line, in the dar_manager database, or in a DCF file.

    1. dar_manager's command-line: simply pass -e "-K <key>" to dar_manager. Note that this will expose the key twice: on dar_manager's command-line and on dar's command-line.
    2. dar_manager database: the database can store some constant options to be passed to dar. This is done using the -o option, or the -i option. The -o option exposes the arguments you want to pass to dar, because they appear on dar_manager's command-line, while the -i option lets you do the same thing in an interactive manner, which is a better choice.
    3. A better way is to use a DCF file with restrictive permissions. This file will contain the '-K <key>' option so dar can read the encrypted backups, and dar_manager will ask dar to read this file thanks to the '-B <filename>' option, given either on dar_manager's command-line (-e "-B <filename>" ...) or from the options stored in the database (-o -B <filename>).
    4. The best way is to let dar_manager pass the -K option to dar, but without password: simply pass the -e "-K :" option to dar_manager. When dar gets the -K option with the ":" argument, it dynamically asks for the password and stores it in secure memory.
    How to overcome the lack of static linking on MacOS X?

    The answer comes from Dave Vasilevsky in an email to the dar-support mailing-list. I let him explain how to do:

    Pure-static executables aren't used on OS X. However, Mac OS X does have other ways to build portable binaries. Here is how to build portable binaries on OS X.

    First, you have to make sure that dar only uses operating-system libraries that exist on the oldest version of OS X that you care about. You do this by specifying one of Apple's SDKs, for example:

    export CPPFLAGS="-isysroot /Developer/SDKs/MacOSX10.2.8.sdk"
    export LDFLAGS="-Wl,-syslibroot,/Developer/SDKs/MacOSX10.2.8.sdk"

    Second, you have to make sure that any non-system libraries that dar links to are linked in statically. To do this edit dar/src/dar_suite/Makefile, changing LDADD to '../libdar/.libs/libdar.a'. If any other non-system libs are used (such as gettext), change the makefiles so they are also linked in statically. Apple should really give us a way to force the linker to do this automatically!

    Some caveats:

    • If you build for 10.3 or lower, you will not get EA support, and therefore you will not be able to save special Mac information like resource forks.
    • To work on both ppc and x86 Macs, you need to build a universal binary. For instructions, use Google :-)
    • To make a 10.2-compatible binary, you must build with GCC 3.3.
    • These instructions won't work for the 10.1 SDK, that one is harder to use.
    Why cannot I test, extract file, list the contents of a given slice from a backup?

    Well this is due to dar's design. However you can list a whole backup and see in which slice(s) a file is located:

    # dar -l test -Tslice -g etc/passwd
    Slice(s)|[Data ][D][ EA ][FSA][Compr][S]|Permission| Filename
    --------+--------------------------------+----------+-----------------------------
       1     [Saved][-]       [-L-][ 69%][ ]  drwxr-xr-x  etc
       2     [Saved][ ]       [-L-][ 63%][ ]  -rw-r--r--  etc/passwd
    ----- All displayed files have their data in slice range [1-2] -----
    #
    Why cannot I merge two isolated catalogues?

    Since version 2.4.0, an isolated catalogue can also be used to rescue a corrupted internal catalogue of the backup it has been isolated from. For that feature to be possible, a mechanism lets dar know whether a given isolated catalogue and a given backup correspond to the same contents. Merging two isolated catalogues would break this feature, as the resulting catalogue would not match any real backup and could only be used as reference for a differential backup.

    How to use the full power of my multi-processor computer?

    Since release 2.7.0 it is possible to have dar efficiently using many threads at two independent levels:

    encryption
    You can specify the number of threads to use to cipher/decipher a backup. Note however that during the tests done for the 2.7.0 validation, it was observed that using more than two encryption threads does not give better results when compression is used, because compression is most of the time more CPU intensive than encryption (this of course depends on the chosen algorithms).
    compression
    Before release 2.7.0, compression was done per file in streaming mode. In this mode, compressing a piece of data requires the result of the compression of the data located before it; this brings a good compression ratio but is impossible to parallelize. To compress in parallel, one needs to split data into blocks and compress the blocks independently. There you can use a lot of threads, up to the point where disk I/O becomes the bottleneck: adding more compression threads beyond that will not change the result. The drawback of per-block compression is not so much the compression ratio (slightly worse than in streaming mode) as the memory requirement: one block of clear data plus the resulting compressed data, times the number of threads. To avoid having any thread waiting for disk I/O, libdar even keeps a few more memory blocks than the number of threads.
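    The principle of per-block parallel compression can be sketched with Python's zlib module and a thread pool; this is a toy model, libdar's actual block format and threading are its own:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_blocks(data: bytes, block_size: int, workers: int) -> list[bytes]:
    """Compress fixed-size blocks independently, so several threads can
    work at once; each in-flight block costs its own buffer of memory."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, blocks))

data = b"example payload " * 4096             # 64 KiB of repetitive data
compressed = compress_blocks(data, block_size=16384, workers=4)
restored = b"".join(zlib.decompress(c) for c in compressed)
print(restored == data)                       # True

# streaming (whole-stream) compression usually does at least as well as
# the concatenated per-block output: this is the ratio trade-off above
print(len(zlib.compress(data)) <= sum(len(c) for c in compressed))
```

    Each block carries its own compressor state and header, which is exactly why per-block mode trades a little ratio (and more memory) for parallelism.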

    To activate multi-threading with dar, use the -G option, read the dar man page for all details about the way to define the number of encryption thread and the number of compression thread, as well as the compression block size to use.

    Is libdar thread-safe, which way do you mean it is?

    libdar is the part of dar's source code that has been rewritten to be used by external programs (like kdar). It has been modified to be used in a multi-threaded environment, thus, *yes*, libdar is thread-safe. However, thread-safe does not mean that you do not have to take some precautions in your programs while using libdar (or any other library).

    Care must thus be taken for two different threads not acting on the same variables/objects at the same time. This is however possible with the use of posix mutex, which would define a portion of code (known as a critical section) that cannot be entered by more than one thread at a time.

    A few objects provided by libdar API supports the concurrent access from several threads, read the API documentation for more.

    How to solve configure: error: Cannot find size_t type?

    This error shows when you lack support for C++ compilation. Check the gcc compiler has been compiled with C++ support activated, or if you are using gcc binary from a distro, double check you have installed the C++ support for gcc.

    Why dar became much slower since release 2.4.0?

    This is the drawback of new features!

    • Especially, to be able to read a dar backup through pipes in sequential mode, dar inserts so-called "escape sequences" (also referred to as tape marks) to know, for example, when a new file starts. This way dar can skip to the next mark upon backup corruption, or when a given file does not have to be restored. However, if such a sequence of bytes is found in a file's data, it must be modified so as not to collide with real escape sequences. This leads dar to inspect all data added to a backup for such byte sequences, instead of just copying the data to the backup (possibly compressing and ciphering it).
    • The other feature that brings an important overhead is the sparse file detection mechanism. To be able to detect a hole in a file and store it into the backup, dar needs, here too, to inspect each file's data.
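    Sparse file detection amounts to scanning the data for long runs of zero bytes. A minimal sketch of the idea (the threshold and dar's actual buffer-level logic differ):

```python
def find_holes(data: bytes, min_hole: int = 4096):
    """Return (offset, length) pairs for zero-byte runs of at least
    min_hole bytes; such runs can be stored as holes instead of data."""
    holes, start = [], None
    for i, b in enumerate(data):
        if b == 0:
            if start is None:
                start = i           # a zero run begins here
        else:
            if start is not None and i - start >= min_hole:
                holes.append((start, i - start))
            start = None
    if start is not None and len(data) - start >= min_hole:
        holes.append((start, len(data) - start))   # run reaching EOF
    return holes

data = b"header" + b"\x00" * 8192 + b"tail"
print(find_holes(data))  # [(6, 8192)]
```

    Every byte of every file has to pass through such a scan, which is where the extra CPU time goes.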

    You can disable both of these features: the -at option suppresses "tape marks" (just another name for escape sequences), but the generated backup can then not be used in sequential read mode, while the -1 0 option completely disables sparse file detection. The execution time then returns to that of the dar 2.3.x releases.

    Why dar became yet slower since release 2.5.0?

    This is again the drawback of new features!

    • The first feature that drains time is Filesystem Specific Attributes (FSA), as it requires new system calls for each file to save. This has little impact when saving a lot of big files, but becomes visible when saving a lot of tiny files or directories.
    • The second feature is the use of the fadvise() system call to preserve cache usage. In other words, dar tells the system it no longer needs a file once it has been read (backup) or written (restoration). This has the advantage of reducing dar's cache pressure to the benefit of other running processes: the idea is to preserve, as much as possible, a live operating system from being affected by a running backup relying on dar. The consequence is that when running dar a second time on the same set of files, with dar 2.4.x and below the data to save was most of the time still in the cache, which could lead to very fast execution, while with dar 2.5.x the data to save may have been flushed out of the cache in favor of data more important to another application. The second time dar is run, the data then has to be read again from disk, which is not as fast as reading it from the cache.

    You can disable both of these features. The second can be disabled at compilation time by giving --disable-fadvise to the ./configure script; the first can be disabled at any time by adding the --fsa-scope=none option to dar. The execution time then returns to that of the dar 2.4.x releases.

    How to search for questions (and their answers) about known problems similar to mine?

    Have a look at the dar-support mailing-list archive, and if you cannot find any answer to your problem, feel free to send an email to this mailing-list describing your problem/need.

    Why does dar tell me that it failed to open a directory, while I have excluded this directory?

    Reading the contents of a directory is done using the usual system calls (opendir/readdir/closedir). The first call (opendir) lets dar designate which directory to inspect, then dar calls readdir to get the next entry of the opened directory. Once there is nothing left to read, closedir is called. The problem here is that dar cannot start reading a directory, process it partially, and then start reading another directory: in brief, the opendir/readdir/closedir system calls are not re-entrant.

    This is particularly critical for dar as it does a depth-first lookup in the directory tree. In other words, from the root, if we have two directories A and B, dar reads A's contents and the contents of its subdirectories, then once finished, it reads the next entry of the root directory (which is B), then reads the contents of B and of each of its subdirectories, and once finished with B, it must go back to the root again and read the next entry. In the meanwhile, dar has had to open many directories to get their contents.

    For this reason dar caches the directory contents: when it first meets a directory, it reads its whole contents and stores it in RAM. Only afterward does dar decide whether or not to include a given directory. But at that point its contents have already been read, thus you may get the message that dar failed to read a given directory's contents even though you explicitly specified not to include that particular directory in the backup.

    Dar reports "SECURITY WARNING! SUSPICIOUS FILE", what does that mean?

    When dar reports the following message:

    SECURITY WARNING! SUSPICIOUS FILE <filepath>: ctime changed since backup of reference was done, while no inode or data changed

    You should be concerned with finding an explanation for the root cause that triggered dar to ring this alarm. As you probably know, a unix file has three (sometimes four) dates:

    1. atime is changed anytime you read the file's contents (this is the last access time)
    2. mtime is changed anytime you write to the file's data (this is the last modification time)
    3. ctime is changed anytime you modify the file's attributes (this is the last change time)
    4. btime is never changed once a file has been created (this is the birth time or creation time); not all filesystems provide it.

    In other words:

    • if you only read the data of a file, only its atime will be updated1
    • if you write some data to a file, its ctime and mtime will change, while atime will stay unchanged
    • if you change ownership, permissions, extended attributes, etc., only ctime will change
    • if you write to a file and then modify its atime or mtime to make it look like the file has not been read or modified, ctime will change in any case.

    Yes, the point is that on most (if not all) unix systems, beyond the kernel itself, user programs can also set the atime and mtime manually to any arbitrary value (see the "touch" command for example), but to my knowledge, no system provides a means to manually set the ctime of a file. This value thus cannot be faked.
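    This can be observed directly with standard GNU coreutils tools (the file name below is just a scratch file for illustration): touch lets you fake the mtime, but the very act of changing it bumps the ctime to the current time.

```shell
# mtime can be set to an arbitrary past date, but ctime records the change:
touch demo_ctime.txt
touch -d "2000-01-01 00:00:00" demo_ctime.txt   # fake the mtime
stat -c 'mtime: %y' demo_ctime.txt              # shows the faked date (year 2000)
stat -c 'ctime: %z' demo_ctime.txt              # shows the current time instead
rm demo_ctime.txt
```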

    However, some rootkits and other nasty programs that tend to hide themselves from the system administrator use this trick and modify the mtime to become more difficult to detect. Thus, the ctime keeps track of the date and time of their infamy. However, ctime may also change while neither mtime nor atime do, in several rare but normal situations. Thus, if you face this message, you should first verify the following points before thinking your system has been infected by a rootkit:

    • have you added or removed a hard link pointing to that file while the file's data has not been modified since the last backup?
    • have you changed this file's extended attributes (including Linux ACL and MacOS file forks) while the file's data has not been modified since the last backup?
    • have you recently restored your data and are now performing a differential backup taking as reference the backup used to restore that same data? Or in other words, has that particular file just been restored from a backup (it was removed by accident, for example)?
    • have you just moved from a dar version older than release 2.4.0 to dar version 2.4.0 or more recent?
    • have you upgraded the package this file is part of since the last backup?

    How to know atime/mtime/ctime of a file?

    • mtime is provided by the command: ls -l
    • atime is provided by the command: ls -l --time=atime
    • ctime is provided by the command: ls -l --time=ctime
    • the stat command provides all dates of a given file: stat <filename>
    Note:
    With dar versions older than 2.4.0 (by default, unless the -aa option is used), once a file had been read for backup, dar set the atime back to the value it had before dar read it. This trick was used to accommodate some programs like leafnode (an NNTP caching program) that base their cache purging scheme on the atime of files. When you do a backup using dar 2.3.11 for example, files that had their mtime modified are saved as expected and their atime is set back to its original value (the value it had just before dar read them), which has the side effect of modifying the ctime. If you then upgrade to dar 2.4.0 or more recent and do a differential backup, and that same file has not been modified since, dar will see that the ctime has changed while no other metadata did (user, ownership, group, mtime), thus this alarm message will show for all files saved in the last 2.3.11 backup. On the next differential backup made using dar 2.4.0 (or more recent), the problem will not show anymore.

    Well, if you cannot find a valid explanation among the ones presented above, you had better consider that your system has been infected by a rootkit or a virus and use all the necessary tools (see below for examples) to find some evidence of it.

    Last point: if you can explain the cause of the alarm and are annoyed by it (you have hundreds of files concerned, for example), you can disable this feature by adding the -asecu switch to the command-line.

    1 atime may also not be updated at all if the filesystem is mounted with the relatime or noatime option.

    Can dar help copy a large directory tree?

    The answer is "yes", for more than one reason:

    1. Many backup/copy tools do not take care of hard linked inodes (hard linked plain files, named pipes, char devices, block devices, symlinks)... dar does,
    2. Many backup/copy tools do not take care of sparse files... dar does,
    3. Many backup/copy tools do not take care of Extended Attributes... dar does,
    4. Many backup/copy tools do not take care of Posix ACL (Linux)... dar does,
    5. Many backup/copy tools do not take care of file forks (MacOS X)... dar does,
    6. Many backup/copy tools do not take any precautions while working on a live system... dar does.

    Using the following command will do the trick without relying on temporary files or a backup:

    dar -c - -R <srcdir> --retry-on-change 3 -N | dar -x - --sequential-read -N -R <dstdir>

    <srcdir> contents will be copied to <dstdir>; both must exist before running this command, and <dstdir> should be an empty directory.

    Here is an example: we will copy the contents of /home/my to /home2/my. First we create the destination directory, then we run dar:

    mkdir /home2/my
    dar -c - -R /home/my --retry-on-change 3 | dar -x - --sequential-read -R /home2/my

    The --retry-on-change option lets dar retry the copy of a file up to three times if that file changed while dar was reading it. You can increase this number at will. If a file fails to be copied correctly after more than the allowed retries, a warning is issued about that file and it is flagged as dirty in the data flow; the second dar command will then ask you whether you want it to be restored (here, copied) or not.

    "piping" ('|' shell syntax) the first dar's output to the second dar's input makes the operation not requiering any temporary storage, only virtual memory is used to perform this copy. Compression is thus not requested as it would only slow down the whole process.

    Last point: you should compare the copied data to the original before removing it, as no backup file has been dropped down to the filesystem. This can simply be done using: diff -r <srcdir> <dstdir>

    But no, diff will not check Extended Attributes, file forks, Posix ACL, hard linked inodes, etc. If you want a more controllable way of copying a large directory, simply use dar with a real backup file: compare the backup against the original filesystem, restore the backup contents to their new place, and compare the restored filesystem against the original backup.
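    The backup-based workflow just described can be sketched with the following commands (the backup path and directories are hypothetical, and dar must be installed; this is a sketch, not a prescribed procedure):

```shell
# Hypothetical copy-and-verify workflow using a real backup file:
dar -c /tmp/copy_backup -R /home/my      # 1. create a backup of the source
dar -d /tmp/copy_backup -R /home/my      # 2. compare the backup against the source
dar -x /tmp/copy_backup -R /home2/my     # 3. restore ("copy") to the new place
dar -d /tmp/copy_backup -R /home2/my     # 4. compare the backup against the copy
```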

    Any better idea? Feel free to contact dar's author for an update of this documentation!

    Does dar compress per file or the whole backup?

    Dar applies compression (gzip, lzo, bzip2, xz/lzma, zstd, lz4, ...) with different compression levels (1 for quick but low compression, up to 9 for best compression but slower execution) on a file-by-file basis. In other words, the compression engine is reset for each new file added to the backup. When a corruption occurs in a globally compressed archive like a compressed tar backup, it is not possible to decompress the data past that corruption: with tar you lose all files stored after such a data corruption.

    Having per-file compression instead limits the impact to one file inside the backup: all files stored before or after such a data corruption can still be restored from that corrupted backup. Compressing per file also opens the possibility of not compressing all files in the backup, in particular already compressed files (like *.jpeg, *.mpeg, some *.avi files and of course *.gz, *.bz2 or *.lzo files). Avoiding compressing already compressed files saves CPU cycles (in other words, it speeds up the backup process). And while compressing an already compressed file takes time for nothing, it also usually requires more storage space than if that same file had not been compressed a second time.

    The drawback is that the overall compression ratio is slightly less good.

    How to activate compression with dar? Use the --compression option (or -z in short), telling the algorithm to use and the compression level (--compression=bzip2:9 or -zgzip:7 for example). You may omit the compression level (which defaults to 9) and even the compression algorithm (which defaults to gzip). Thus -z or -zlzo are correct.

    To select which files to compress or not, several options are available: --exclude-compression (or -Z in short --- the uppercase Z here) and --include-compression (or -Y in short). Both take as argument a mask that, based on their names, defines the files that have to be compressed or not. For example -Z "*.avi" -Z "*.mp?" -Z "*.mpeg" will avoid compressing MPEG, MP3, MP2 and AVI files. Note that dar provides in its /etc/darrc default configuration file a long list of -Z options to avoid compressing the most common compressed file types, which you can activate by simply adding compress-exclusion on dar's command-line.

    In addition to excluding/including files from compression based on their name, you can also exclude small files (for which the compression ratio is usually poor) using the --mincompr option, which takes a size as argument: --mincompr 1k will avoid compressing files whose size is less than or equal to 1024 bytes. You should find all details about these options in dar's man page. Check also the -am and -ar options to understand how --exclude-compression and --include-compression interact with each other, or how to use regular expressions in place of glob expressions in masks.
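    Putting these options together, a backup command might look like the following sketch (the backup path and source directory are hypothetical):

```shell
# Hypothetical example: gzip level 7, skip already-compressed video/audio
# files and any file of 1 KiB or less:
dar -c /tmp/my_backup -R /home/me \
    -zgzip:7 \
    -Z "*.avi" -Z "*.mp?" -Z "*.mpeg" \
    --mincompr 1k
```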

    What slice size can I use with dar?

    The minimum slice size is around 20 bytes, but such a slice will only store 3 to 4 bytes of information, due to the slice header that needs around 15 bytes in each slice (this varies depending on the options used and may increase in future backup format versions). But there is no maximum slice size! In other words, you can give to the -s and -S options as large a positive integer as required: thanks to its own internal integer type named "infinint", dar is able to handle arbitrarily large integers (file offsets, file sizes, etc.).

    You can make use of suffixes like 'k' for kilo, 'M' for mega, 'G' for giga, etc. (all suffixes are listed here) to simplify your work. See also the -aSI and -abinary options to swap the meaning between ko (= 1000 octets) and kio (= 1024 octets).
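    To illustrate the difference these options make, here is the plain arithmetic behind the two interpretations of "2G" (this is shell arithmetic, not dar output):

```shell
# 2G interpreted as SI (powers of ten) versus binary (powers of two):
echo "2G (SI)     = $((2 * 1000 * 1000 * 1000)) bytes"
echo "2G (binary) = $((2 * 1024 * 1024 * 1024)) bytes"
```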

    Last point: dar/libdar can be compiled using the --enable-mode=64 option given to ./configure while building dar (this is the default since release 2.6.0). This replaces the "infinint" type with 64 bit integers, for better performance and reduced memory usage. However this has some drawbacks on backup size and dates. See the limitations for more details.
    Since release 2.6.0 the default is the 64 bit mode; to have dar/libdar use infinint one needs to run ./configure --enable-mode=infinint.

    Is there a dar fuse filesystem?

    You can find several applications relying on dar or directly on libdar to manage dar backups; these are referred to here as external software because they are not maintained nor were created by the author of dar and libdar. AVFS is such an external software: it provides a virtual file system layer for transparently accessing the content of backups and remote directories just like local files.

    How does dar compare to tar or rsync?

    It all depends on the use case you want to address. A benchmark has been set up to compare the performance, features and behaviors of dar, rsync and tar against a set of common use cases. Hopefully this will help you answer this question.

    Why, when comparing a backup with the filesystem, does dar not report new files found on the filesystem?

    Backup comparison (-d option) is to be seen as a step further than backup testing (-t option), where dar checks the backup's internal structure and usability. The step further here is not only checking that each part of the backup is readable and has a correct associated CRC, but also that it matches what is present on the filesystem. So yes, if new files are present on the filesystem, nothing is reported. If a file changed, dar reports that the file does not match what's in the backup; if a file is missing, dar cannot compare it with the filesystem and reports an error too.

    So you want to know what has changed on your filesystem? No problem, do a differential backup! OK, you don't want a new backup or do not have the space for it? Just output the backup to /dev/null and request on-fly isolation as follows:

    dar -c - -A <ref backup> -@ <isolated> ... other options ... > /dev/null

    <ref backup>
    is the backup of reference or an isolated catalogue
    <isolated>
    is the name of the isolated catalogue to produce.

    Once the operation has completed, you can list the isolated catalogue using the following command:

    dar -l <isolated> -as

    It will give you the exact difference between your current filesystem and the filesystem at the time <ref backup> was made: modified and new files are reported with [inref] for data, EA or both, while deleted files are reported with the [--- REMOVED ENTRY ----] information, followed by the estimated removal date and the type of the removed file ([-] for plain file, [d] for directory, and so on; more details in dar's man page for the listing command).

    Why does dar not automatically perform delta difference (aka rsync increment)?

    Because delta difference is in theory subject to checksum collision (though it is very improbable), which could lead to a new version of a file being seen as identical to an older one while some changes took place in it. A second reason is to respect the preference of users who do not want this feature activated by default. Anyway, activating delta difference with dar is quite simple and flexible, see note.

    Why does dar report truncated filenames under Windows, especially with cyrillic filenames?

    Dar/libdar was first developed for Linux. It has later been ported to many other operating systems. For Unix-like systems (FreeBSD, Solaris, ...), it can run as a native program by just recompiling it for the target OS and processor. For Windows systems, it cannot, because Unix and Windows do not provide the same system calls at all. The easiest way to have dar running under Windows was to rely on Cygwin, which translates Unix system calls to Windows system calls. However Cygwin brings some limitations. One of them is that it cannot provide filenames longer than 256 bytes, while today's Windows can have much longer filenames.

    What is the point with cyrillic filenames? Cyrillic characters, unlike most latin ones, are not stored as a single byte: they usually use several bytes per character, thus this maximum filename size is reached much quicker than with latin filenames, though the problem also exists with the latter.

    The consequence is that when dar reads a directory that contains a long filename, the Cygwin layer is not able to provide it entirely: the filename is truncated. When dar then wants to read information about that filename, most of the time such a truncated filename does not exist and dar relays the message from the system that this file does not exist (which might sound strange from the user's point of view). Since release 2.5.4 dar reports instead that the filename has been truncated and that it will be ignored.

    I have a 32 bit windows system, which binary package can I use?

    Up to and including release 2.4.15, the dar/libdar binaries for windows were built on a 32 bit windows (XP) system. After that release, binaries for windows have been built using a 64 bit windows system (7, now 8 and probably 10 soon). Unfortunately, the filenames of the binary packages for windows did not reflect that change and were still labeled "i386" while the included binaries no longer support the i386 CPU family (which are 32 bit CPUs). This is an oversight that went unnoticed until Adrian Buciuman's remark on the dar-support mailing-list on September 23rd, 2016. In consequence, after that date binary packages for windows receive an additional field corresponding to the windows flavor they have been built against.

    Some may still need 32 bit windows binaries of dar; unfortunately I no longer have access to such a system, but if you have such a windows ISO image and a valid license to give me, I could install it into a virtual machine and provide binary packages for 32 bits too.

    Until then, you can build the windows binary yourself. Here follows the recipe:

    Install Cygwin on windows, including at least the following packages:

    • clang C/C++ compiler
    • cygwin devel
    • doxygen
    • gettext-devel
    • liblzo2-devel
    • libzzip-devel
    • libgpgme-devel
    • librsync-devel (starting future release 2.6.0)
    • make
    • tcsh
    • zip
    • upx

    Then get the dar source code and extract its contents (either using windows native tools or using tar under cygwin). For clarity, let's assume you have extracted the dar source package for version x.y.z into the C:\Temp directory, thus you now have the directory C:\Temp\dar-x.y.z

    Run a cygwin terminal and "cd" into that directory:

    cd /cygdrive/c/Temp/dar-x.y.z

    In the previous command, note that from within a cygwin shell the path uses slashes, not windows backslashes; note also that the 'c' is lowercase while windows shows an uppercase letter for drives...

    But don't worry, we are almost finished, run the following script:

    misc/batch_cygwin x.y.z

    Starting with release 2.5.7 the syntax has changed:

    misc/batch_cygwin x.y.z win32

    The new "win32" or "win64" field is used to label the zip package containing the dar/libdar binary for windows; it is up to you to choose the value corresponding to your OS 32/64 bit flavor.

    At the end of the process you will get a dar zip file for windows in C:\Temp\dar-x.y.z directory.

    Feel free to ask for support on the dar-support mailing-list if you encounter any problem building the dar binary for windows; this FAQ will be updated accordingly.

    Path slash and back-slash consideration under Windows

    The paths given to dar's arguments and options must respect the UNIX way (use slashes "/", not the back slashes "\" that would be expected under Windows); thus for example you have to use /temp in place of \temp

    Moreover, drive letters cannot be used the usual way, like c:\windows\system32. Instead you will have to give the following path: /cygdrive/c/windows/system32. As you see, the /cygdrive directory is a virtual directory that has all the drives as child directories.

    Here is a more global example:

    c:\dar_win-1.2.1\dar -c /cygdrive/f/tmp/toto -s 2G -z1 -R "/cygdrive/c/My Documents"

    Note that the path to the dar command itself uses back-slashes, as usual under windows when pointing to a command, while the paths given in arguments to dar use slashes.

    Under Windows, which directory corresponds to / ?

    When running dar from a windows command-line (thus not from a cygwin environment), dar's root directory is the parent directory of the one holding the dar.exe file. This does not mean that you cannot have dar back up anything outside this directory (you can, thanks to the /cygdrive/... path alias seen above), but when dar looks for darrc it uses this parent directory as the "/" root one.

    Since release 2.6.14, the published dar packages for Windows are configured and built in such a way that dar.exe uses the provided darrc file located in the etc sub-directory. So darrc is now usable out of the box. However, if you rename the directory where dar.exe is located, whose name is something like dar64-x.y.z-win64, the dar.exe binary will still look for a darrc at /dar64-x.y.z-win64/etc/darrc, taking as root directory the parent of the directory where it resides. You can still explicitly rely on it by means of a -B option pointing to the modified path where the darrc file is located.

    Under Windows, which directory corresponds to $HOME ?

    There is no such HOME variable in windows by default; however, John Slattery reported on the dar-support mailing-list that if you set such a variable at the command prompt, dar will look for its .darrc at the path pointed to by this HOME variable. For example:

    > set HOME=/cygdrive/c/users/john
    or
    > set HOME=c:\users\john

    > cd c:\program files\dar64-2.7.14-win64
    > .\dar.exe ....
    lzo compression is slower with dar than with lzop, why?

    When using the "lzo" compression algorithm, dar/libdar always uses the lzo1x_999 algorithm with the requested compression level (from 1 to 9) as argument. Dar thus provides 9 different compression/speed levels with lzo.

    On the other hand, as of today (2017) lzop, the command line tool, uses the much degraded lzo algorithm known as lzo1x_1_15 for level 1 and the intermediate lzo1x_1 algorithm for levels 2 to 6, which makes levels 2 to 6 totally equivalent from the lzop program's point of view. Last, compression levels 7 to 9 of lzop use the same lzo1x_999 algorithm as dar/libdar, which is the only algorithm of the lzo family that makes use of a compression level. In total, lzop provides only 5 different compression levels/algorithms.

    So now you know why dar is slower than lzop when using lzo compression at levels 1 to 6. To get a feature equivalent to what lzop provides for levels 1 and 2-6, dar/libdar provides two additional lzo-based compression algorithms: lzop-1 and lzop-3. As you can guess, lzop-1 uses the lzo1x_1_15 algorithm as lzop does for its compression level 1, and lzop-3 uses the lzo1x_1 algorithm as lzop does for its compression levels 2 to 6. For both the lzop-1 and lzop-3 algorithms the compression level is not used: you can keep the default or change its value, this will not change dar's behavior.

    compression level   algorithm    compression level   lzo algorithm
    for lzop            for dar      for dar             used
    -----------------   ---------    -----------------   -------------
    1                   lzop-1       -                   lzo1x_1_15
    2                   lzop-3       -                   lzo1x_1
    3                   lzop-3       -                   lzo1x_1
    4                   lzop-3       -                   lzo1x_1
    5                   lzop-3       -                   lzo1x_1
    6                   lzop-3       -                   lzo1x_1
    -                   lzo          1                   lzo1x_999
    -                   lzo          2                   lzo1x_999
    -                   lzo          3                   lzo1x_999
    -                   lzo          4                   lzo1x_999
    -                   lzo          5                   lzo1x_999
    -                   lzo          6                   lzo1x_999
    7                   lzo          7                   lzo1x_999
    8                   lzo          8                   lzo1x_999
    9                   lzo          9                   lzo1x_999

    What is libthreadar and why libdar relies on it?

    libthreadar is a wrapping library around Posix C threads. It was originally part of webdar, a libdar-based web server project, but as this code became necessary inside libdar too, all these thread-related classes were put into a separate library called libthreadar, which today both webdar and libdar rely upon.

    dar/libdar relies on libthreadar to manage several threads inside libdar, which is necessary to efficiently implement the remote repository feature based on libcurl (available starting with release 2.6.0).

    Why not use the boost library or the thread support brought by C++11?

    First, because no compiler implemented C++11 at the time webdar was started, and second, because boost threads were not found to be adapted to the need, for the following reasons:

    • I wanted a more object-oriented approach than passing a function to be run in a separate thread, as provided by the boost/C++11 interface; hence the pure virtual class libthreadar::thread you can derive inherited classes from.
    • I wanted to avoid functions/methods with multiple parameters, as they have shown in the past with libdar to be a source of problems for backward compatibility when adding new features. Instead, the inherited class can provide as many different methods as needed to set up individual parameters before the thread is run().
    • As a consequence, another need was to be able to set up an object before the thread is effectively run: the C++ object's existence need not match the thread's existence; in other words, the object shall be created first and the thread run() afterward. Of course, destroying a thread object kills the thread it is wrapping. The other advantage of doing it this way is the possibility to re-run() a thread from the same object once a first thread has completed, possibly modifying some parameters through the methods provided by the class inherited from libthreadar::thread.
    • Last but not least, I wanted an exception thrown from within a thread and not caught before the thread's top-level function (thus leading the thread to end) to be kept beyond the thread's existence and rethrown in the thread calling the join() method on that object. This allows a coherent treatment of errors using C++ exceptions when threads are used.

    libthreadar does all this and is a completely independent piece of software from both webdar and dar/libdar. So you can use it freely (LGPLv3 licensing) if you want. As with all projects I have published, it is documented as much as possible; feedback is always welcome if something is unclear, wrong or missing.

    libthreadar's source code can be found here; documentation is available in the source package as well as online here.

    I have sftp public key authentication working with ssh/sftp, how can I have dar use this public key authentication for sftp too?

    The answer is as simple as adding the following option when calling dar: -afile-auth

    Why not do pubkey authentication by default and fall back to password authentication?

    First, this is by choice, because -afile-auth also uses ~/.netrc even when using sftp. Second, it would be possible to first try public key authentication and fall back to password authentication, but it would require libdar to first connect, possibly fail if the pubkey was not provisioned or was wrong, then connect again asking the user for a password on the command line. It seems more efficient to do otherwise: file-based authentication when the user asks for it, password authentication otherwise. The burden for the user is not huge (you can add -afile-auth in your ~/.darrc and forget about it).

    I cannot get dar to connect to a remote server using SFTP, it fails with "SSL peer certificate or SSH remote key was not OK"

    This may be due to several well known reasons:

    • dar/libdar cannot find the known_hosts file
    • if using key authentication instead of password, dar/libdar cannot find the private key file
    • if using key authentication instead of password, dar/libdar cannot find the public key file
    • you have an outdated version of the libssh2 or libcurl library and it lacks support for ecdsa host keys

    How to work around this?

    For the first three cases, you can make use of environment variables to change the default behavior:

    DAR_SFTP_KNOWNHOSTS_FILE
    DAR_SFTP_PUBLIC_KEYFILE
    DAR_SFTP_PRIVATE_KEYFILE

    They respectively default to:

    $HOME/.ssh/known_hosts
    $HOME/.ssh/id_rsa.pub
    $HOME/.ssh/id_rsa

    Change them according to your needs in the shell before running dar; for example, if you use sh or bash:

    export DAR_SFTP_KNOWNHOSTS_FILE=~/.ssh/known_hosts_alternative
    # then use dar as expected
    dar -c sftp://....
    dar -t sftp://...

    if you use csh or tcsh:

    setenv DAR_SFTP_KNOWNHOSTS_FILE ~/.ssh/known_hosts_alternative
    # then use dar as expected
    dar -c sftp://...
    dar -t sftp://...

    For the fourth and last case, things are more tricky.

    First, if you don't already know what the known_hosts file is used for:

    It is used by ssh/sftp to validate that the host you connect to is not a pirate host trying to put itself between you and the real sftp/ssh server you intend to connect to. Usually, the first time you connect to an sftp/ssh server you need to validate the fingerprint of the key received from the server (checking by another means, like a phone call to the server's admin, https web browsing to the server's page, and so on). When you validate the host key that first time, a new line is added to the known_hosts file so that the ssh/sftp client can automatically check, the next time you connect, that the host is still the correct one.

    The known_hosts file is usually located in your home directory at ~/.ssh/known_hosts and looks like this:

    asteroide.lan ecdsa-sha2-nistp256 AAAAE2V...
    esxi,192.168.5.20 ssh-rsa AAAAB3N...
    192.168.6.253 ssh-rsa AAAAB3N...

    Each line concerns a different sftp/ssh server and contains three fields:

    <hostname or IP>
    this is the server we have already connected to
    <host-key type>
    this is the type of key
    <key>
    this is the public key the server has sent the first time we connected

    We will focus on the second field.

    dar/libdar relies on libcurl for network protocol interaction, which in turn relies on libssh2. Before libssh2 1.9.0, only rsa host keys were supported, leading to this message as soon as the known_hosts file contained a non-rsa host key (even for another host listed in the known_hosts file than the one we intend to connect to). As of December 2020, while libssh2 1.9.0 now has support for additional host key types (ecdsa and ed25519), libcurl does not yet leverage this support and the problem persists. I'm confident that things will be updated for this problem to be solved in a few months.

    In the meantime, several options are available to workaround that limitation:

    1. Disable known_hosts checking by setting the environment variable DAR_SFTP_KNOWNHOSTS_FILE to an empty string. Libdar will then not ask libcurl/libssh2 to check hosts validity, but this is not a recommended option! It opens the door to man-in-the-middle attacks.
    2. Copy the known_hosts file to ~/.ssh/known_host_for_libssh2 and remove from this copy all the lines corresponding to host keys that are not supported by libssh2, then set the DAR_SFTP_KNOWNHOSTS_FILE variable to that new file. This workaround is OK only if the unsupported host keys are not the ones of the servers you intend to have dar communicate with...
    3. Replace the host key of the ssh/sftp server by an ssh-rsa one. OK, this will most probably require you to have root permission on the remote ssh/sftp server... which is not possible when using a public cloud service over the Internet.
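    Workaround 2 can be sketched as follows. The demo file contents and names here are illustrative (in real use you would copy your actual ~/.ssh/known_hosts); only the DAR_SFTP_KNOWNHOSTS_FILE variable is dar's real mechanism:

```shell
# Hypothetical demo: build a reduced known_hosts copy that keeps only the
# ssh-rsa entries, then point dar/libdar at it (workaround 2 above).
printf '%s\n' \
  'asteroide.lan ecdsa-sha2-nistp256 AAAAE2V...' \
  'esxi,192.168.5.20 ssh-rsa AAAAB3N...' > known_hosts_demo
grep ' ssh-rsa ' known_hosts_demo > known_hosts_for_libssh2
export DAR_SFTP_KNOWNHOSTS_FILE="$PWD/known_hosts_for_libssh2"
```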
    Cannot open catalogue: Cannot handle such a too large integer. What to do?

    Unless using dar/libdar built in 32 bits mode, you should not meet this error message from dar unless you exceed the 64 bits integer limits. To know which integer type dar relies on (infinint, 32 bits or 64 bits), run dar -V and check the line "Integer size used":

    # src/dar_suite/dar -V

    dar version 2.7.0_dev, Copyright (C) 2002-2020 Denis Corbin
    Long options support         : YES

    Using libdar 6.3.0 built with compilation time options:
       gzip compression (libz)      : YES
       bzip2 compression (libbzip2) : YES
       lzo compression (liblzo2)    : YES
       xz compression (liblzma)     : YES
       zstd compression (libzstd)   : YES
       lz4 compression (liblz4)     : YES
       Strong encryption (libgcrypt): YES
       Public key ciphers (gpgme)   : YES
       Extended Attributes support  : YES
       Large files support (> 2GB)  : YES
       ext2fs NODUMP flag support   : YES
       Integer size used            : 64 bits
       Thread safe support          : YES
       Furtive read mode support    : YES
       Linux ext2/3/4 FSA support   : YES
       Mac OS X HFS+ FSA support    : NO
       Linux statx() support        : YES
       Detected system/CPU endian   : little
       Posix fadvise support        : YES
       Large dir. speed optimi.     : YES
       Timestamp read accuracy      : 1 nanosecond
       Timestamp write accuracy     : 1 nanosecond
       Restores dates of symlinks   : YES
       Multiple threads (libthreads): YES (1.3.1)
       Delta compression (librsync) : YES
       Remote repository (libcurl)  : YES
       argon2 hashing (libargon2)   : YES
       compiled the Jan 7 2021 with GNUC version 8.3.0

    dar is part of the Disk Backup suite (Release 2.7.0_dev)
    dar comes with ABSOLUTELY NO WARRANTY; for details type `dar -W'.
    This is free software, and you are welcome to redistribute it under
    certain conditions; type `dar -L | more' for details.

    If you read "infinint" and still see the above error message from dar, please report a bug: this should never occur. Otherwise, the problem appears when using dar before release 2.5.13, either at backup creation time when dar meets a file with a negative date, or at backup reading time, when reading a backup generated by dar 2.4.x or older that contains a file with a very distant date in the future, something dar 2.4.x and below recorded when the system returned a negative date for a file to save.

    What is a negative date? Dates of files are recorded in "unix" time, that is to say the number of seconds elapsed since the beginning of year 1970. A negative date thus means a date before 1970, which should normally not be met today, because the few computers that existed at that time neither stored dates this way nor had the same files and filesystems.

    However, for some reasons, such negative dates can be returned by several operating systems (Linux-based ones among others), and dar today does not have the ability to record such dates (but if you need dar to store negative dates for a good reason, please file a feature request explaining why you need this feature).

    Since release 2.5.13, when the system reports a negative date for a file to save, dar asks the user whether to consider the date as zero. This requires user interaction and may not fit all needs. For that reason, the -az option has been added to automatically assume negative dates read from the filesystem to be equal to zero (January 1st 1970, 00:00 GMT) without user interaction.
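    To see what a negative unix time looks like, GNU date can render an arbitrary epoch value (the @N syntax is GNU-specific, so this sketch assumes GNU coreutils):

```shell
# Unix time counts seconds since 1970-01-01 00:00:00 UTC;
# a negative value therefore denotes a date before 1970.
date -u -d @0 +%Y-%m-%d        # -> 1970-01-01
date -u -d @-86400 +%Y-%m-%d   # one day before the epoch -> 1969-12-31
```

    Such values are what dar maps to zero when -az is given.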

    I have a diff/incremental backup and I want to convert it to a full backup, how to do that?

    It is possible to convert a differential backup if you also have the full backup it has been based on, in other words the backup of reference. This is pretty simple to do:

    dar -+ new_full_backup -A backup_of_reference -@ differential_backup full-from-diff [other options]
    new_full_backup
    is the backup that will be created according to the other options provided (compression, encryption, slicing, hashing and so on, as specified by the arguments).
    backup_of_reference
    is the full backup that was used as reference for the differential backup
    differential_backup
    is the differential backup you want to convert into a full backup

    The important point is the last argument "full-from-diff", which is defined in /etc/darrc and makes the merging operation used here (-+ option) work so that the resulting backup is the same as if a full backup had been done instead of a differential backup at the time "differential_backup" was created.

    For incremental backups (backups whose reference is not a full backup) you can also use this method, but you first need to create the full backup from the incremental/differential backup that was used as reference for this incremental backup. Thus the process should follow the same order that was used to create the backups.

    How to use dar with tapes (like LTO tapes)?

    dar (Disk ARchive) was designed to replace tar (Tape ARchive) by leveraging the direct access brought by disks, something tar was not able to use. A tape by nature does not allow jumping to a given position (or at least, skipping back and forth is so inefficient that it is barely used). That said, dar has also evolved to replace tar when it comes to using tapes (like LTO tapes) as backup media. The advantages of dar here are integrated ciphering, efficient compression (no need to compress already compressed files), resiliency, redundancy and CRC data protection, to name the most interesting features.

    Backup operation

    dar can produce a backup on its stdout, which can be piped or redirected to a tape device. That's easy:

    dar -c - (other options)... > /dev/tape
    dar -c - (other options) | some_command > /dev/tape

    Things get more complicated when the backup exceeds the size of a single tape. For that reason dar_split has been added to the suite of dar programs. Its purpose is to receive the backup produced by dar on its standard input and write it to a given file until a write fails due to lack of space. At that time, it records what was written and what still remains to be written, closes the descriptor of the target file, displays a message and waits for the user to hit enter. Then it reopens the file and continues writing the pending data to that target file. In the meantime, the user is expected to have done what is necessary for further writing to this same file (or special device) to work, for example replacing the tape by a new one rewound at its beginning, tape that will be overwritten by the continuation of the dar backup:

    dar -c - (other options)... | dar_split split_output /dev/tape

    Testing operation

    Assuming you have completed your backup over three tapes, you should now be concerned with testing the backup:

    dar_split split_input /dev/tape | dar -t - --sequential-read

    Before running the previous command, you should have rewound all your tapes to the offset they had when you used them to write the dar backup (their beginning, most of the time). The first tape should be inserted in the drive, ready for reading. Neither dar nor dar_split knows about the location of the data on tape: they will not seek the tape forward or backward, they just read (or write, depending on the requested operation) sequentially.

    When dar_split reaches the end of a tape, the process pauses and lets you swap the tape with the following one. You can also take the time to rewind the tape before swapping it, if you want. Once the next tape is ready in the drive and set at the proper offset, just hit enter in the dar_split terminal for the process to continue.

    At the end of the testing, dar will report the backup status (hopefully the backup test will succeed), but dar_split does not know anything about that and still continues trying to provide data to dar, so you will have to hit CTRL-C to stop it.

    To avoid stopping dar_split by hand, you can indicate to dar_split the number of tapes used for the backup, by means of the -c option. If, after the last tape at backup time, you wrote an EOF tape mark (mt -f /dev/tape weof), then dar_split will stop by itself after that number of tapes. In our example, the backup spanned three tapes, hence the -c 3 option:

    dar_split -c 3 split_input /dev/tape | dar -t - --sequential-read

    Listing operation

    Listing operation can be done the same way as the testing operation seen above, just replacing -t by -l:

    dar_split split_input /dev/tape | dar -l - --sequential-read

    But what a pity not to use the isolated catalogue feature! Catalogue isolation lets you keep on disk (not on tape) a small file containing the table of contents of the backup. Such a small file can be used as a backup of the internal catalogue of the backup (which resides on tape) to recover from corruption of that part of the backup (this gives an additional level of protection for backup metadata). It can also be used for backup content listing, it can be provided to dar_manager and, most interestingly, it can be used as reference for incremental or differential backups in place of reading the reference backup content from tapes.

    Assuming you did not create an isolated catalogue at the time of the backup, let's do it once the backup has been written to tape:

    dar_split split_input /dev/tape | dar -A - --sequential-read -C isolated -z

    This leads dar to read the whole backup. Thus, it is more efficient to create the isolated catalogue "on-fly", that is to say during the backup creation process, in order to avoid this additional reading operation:

    dar -c - --on-fly-isolate isolated (other options)... | dar_split split_output /dev/tape

    You will get a small isolated.1.dar file (you can of course replace isolated, after -C or --on-fly-isolate, by a more meaningful name), located in the current directory by default, while your backup is sent to tapes, as already seen earlier.

    The isolated catalogue can now be used in place of the backup on tapes, the process becomes much much faster for listing the backup content:

    dar -l isolated (other options like filters)...

    Restoration operation

    You can perform a restoration the same way we did the backup testing above, just replacing -t by -x:

    dar_split split_input /dev/tape | dar -x - --sequential-read (other options like --fs-root and so on)

    But it is better to leverage an isolated catalogue, in particular if you only plan to restore a few files. Without an isolated catalogue, dar has to read the whole backup up to its end (the same as tar does, though for other reasons) to reach the internal catalogue, which contains additional information (like files that have been removed since the backup of reference was made). Using an isolated catalogue avoids that and lets dar stop reading earlier, that is to say, once the last file to restore has been reached in the backup. So if this file is located near the beginning of the backup, you can save a lot of time using an isolated catalogue!

    dar_split split_input /dev/tape | dar -x - -A isolated --sequential-read (other options, like --fs-root and so on)

    Rate limiting

    It is sometimes necessary to rate-limit the output from and to tapes. dar_split has a -r option for that purpose:

    dar_split -r 10240000 split_input /dev/tape | dar ...
    dar ... | dar_split -r 20480000 split_output /dev/tape

    The argument to the -r option is expected in bytes per second.
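    Since -r takes bytes per second, a small shell computation helps when you think in MiB/s; the 10 MiB/s target below is just an example figure:

```shell
# Convert a target throughput in MiB/s to the bytes-per-second
# value expected by dar_split's -r option.
mibps=10
rate=$(( mibps * 1024 * 1024 ))
echo "$rate"   # -> 10485760
```

    You would then pass the result as dar_split -r "$rate" ... on the command line.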

    Block size

    Some tape devices do not behave well when the data requested from or sent to them uses large blocks at once. Usually the operating system knows about that and splits application-provided data into smaller blocks if necessary. Sometimes this is not the case, hence the -b option, which receives the maximum block size in bytes that dar_split will use. It does not matter whether the block size used when writing differs from the one used at reading time; both must just not exceed the block size supported by the tape device:

    dar_split -b 1048576 split_input /dev/tape | dar ...
    dar ... | dar_split -b 1048576 split_output /dev/tape

    Differential and incremental backups

    Differential and incremental backups are built the same way: by providing the backup of reference at the time of the backup creation, by means of dar's -A option. One could use dar_split twice for that: once to read the backup of reference from a set of tapes, an operation that precedes the backup itself, then a second dar_split command to send the new backup to tapes... The problem is that the second dar_split would open the tape device for writing while it first has to be opened for reading by the first dar_split command, in order to fetch the backup of reference.

    Thus, in this context, we have no choice (unless we have two tape drives): we must rely on an isolated catalogue of the backup of reference:

    dar -c - -A isolated_cat_of_ref_backup (other options)... | dar_split split_output /dev/tape

    dar_split and tar

    dar_split is by design a command separate from dar. You can thus use it with any command other than dar; in particular, yes, you can use it with tar if you don't want to rely on the additional features and resiliency dar provides.

    Why dar does not compress small files together for better compression ratio?

    Since around the year 2010, this is a question/suggestion/remark/review that has haunted the dar-support mailing-list and new feature requests, resurrecting from time to time: why does dar not compress small files together in the dar archive for better compression, like tar (its grand and venerable brother) does?

    First point to note: tar does not compress at all. It is gzip, bzip2, xz or other similar programs that take what tar outputs as an unstructured input, in order to produce an unstructured compressed data stream redirected into a file.
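    The classic tar pipeline makes this visible: tar only aggregates, and the compressor sees one opaque stream. The directory and file names below are made up for the demonstration:

```shell
# tar aggregates files into one uncompressed stream; gzip then
# compresses that whole stream, with no per-file structure.
mkdir -p demo
printf 'hello\n' > demo/a.txt
printf 'world\n' > demo/b.txt

tar -cf - demo | gzip > demo.tar.gz

gzip -t demo.tar.gz    # the result is one gzip stream...
tar -tzf demo.tar.gz   # ...that must be decompressed to list members
```

    This is exactly why a corruption early in a .tar.gz loses everything after it, as discussed below.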

    It would be tempting to answer: "You can do the same with dar!", but there are better things to do, read below.

    But first, let's recall dar's design and objectives:

    • compression is done per file
    • a given file's data can be accessed directly

    Doing so has several advantages:

    • In a given backup/archive, you can avoid compressing some files while compressing others (a gain of time and space, as compressing already compressed files usually wastes storage space).
    • You can quickly restore a particular file, even from a several-petabytes archive/backup: no need to read (disk I/O) and decompress (CPU cycles) all the data present before that file in the archive.
    • Your backups are more robust: if even a single byte of data corruption occurs somewhere in one of your backups, it will concern only one file, and you will still be able to restore all other files, even those located after the corruption. By contrast, with tar's way of compressing, you would lose all data following the data corruption...

    dar works this way because tar's way did not address some major concerns in the backup area. Yes, this has the drawback of degrading the compression ratio, but this is a design choice.

    Now, looking for the best of both approaches, some proposed to gather small files and compress them together. This would not only break the three advantages exposed above, but also break another feature, which is the order in which files are stored: dar does not inspect the same directory twice, neither at backup time nor at restoration time. Doing so avoids saving the full path of each directory and file (and at two places: in-line metadata and in the catalogue). This also leads to better performance, as it better leverages the disk cache for metadata (directory contents). OK, one could say that today with SSD and NVMe this is negligible, but one would ignore that direct RAM access from cache is still much faster than any NVMe disk access.

    So, if you can't afford keeping small files uncompressed (see dar's --mincompr, -X and -I options for example), or if compressing them with dar versus what tar does makes so big a difference that it is worth considering compressing them together, you have three options:

    1. use tar in dar

      • make a tar archive of the many small files you have, just a tar file, without compression. Note: you can automate this when entering particular directory trees of your choice by means of the -<, -> and -= options, and remove those temporary tar files when dar exits those directories at backup time. You would also have to exclude the files used to build the tar file you created dynamically (see the -g/-P/-X/-I/-[/-] options).
      • Then let dar perform the backup, compressing those tar files with other files if they satisfy the --mincompr size or any other filtering of your choice (see the -Z and -Y options). Doing so lets you leverage the parallel compression and reduced execution time brought by dar, something you cannot have with tar alone.
      • Of course, you also benefit from all other dar features (slicing, ciphering, on-fly slice hashing, isolated catalogues, differential/incremental/decremental backups... and even binary delta!)

      But yes, you will lose dar's three advantages seen above, though just for those small files you have gathered in a tar-in-dar file, not for the rest of what's under backup.
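      A minimal sketch of the "tar in dar" idea: bundle a directory of many small files into one uncompressed tar file, which dar can later compress as a single entry. The directory and file names here are hypothetical:

```shell
# Gather many small files into a single uncompressed tar member;
# dar would then compress small-files.tar as one well-compressible file.
mkdir -p project/many-small-files
printf 'x\n' > project/many-small-files/f1
printf 'y\n' > project/many-small-files/f2

# -cf without -z keeps the tar uncompressed, leaving compression to dar.
tar -cf project/small-files.tar -C project many-small-files
tar -tf project/small-files.tar

# The dar backup would then exclude many-small-files/ (e.g. with -P)
# and include small-files.tar, as described in the bullets above.
```

      At backup time the -<, -> and -= options mentioned above can trigger such a script automatically on entering the directory.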

    2. use tar alone

      If dar does not match your needs and/or if you do not need to leverage any of the three dar advantages seen above, tar is probably a better choice for you. That's a pity, but there is no single tool that matches all needs...

    3. describe with details a new implementation/enhancement

      The proposal should take into account dar's design objectives (robustness to data corruption, efficient directory seeking, fast access to any file's data) in one way or another.

      But please, do not make an imprecise proposal that assumes it will just "magically" work: I only like magic when I go to a magic show ;)

      Please detail both the backup and restoration processes. Oftentimes, pulling out the missing details one after the other results in something unfeasible, or with unexpected complexity and/or much less gain than expected. Also look at the Dar Archive Structure document to see how your proposal could fit or, if not, what part should be redesigned and how.

    dar-2.7.17/doc/samples/0000755000175000017520000000000014767510034011601 500000000000000dar-2.7.17/doc/samples/PN_backup-root.options0000644000175000017520000000117414041360213015747 00000000000000### Options that are appended to the dar command: # No warning when not run from a terminal -Q # Don't try to read darrc files -N # Be verbose (so everything can be logged) -v # No warn on overwrite (should not happen anyway) -w # Compression level -z1 # Keep empty directories as such, so they can be restored -D # Blowfish encryption -K bf:secretpassword # Directory to backup -R "/" # Excludes (must be specified as relative paths to the directory # that is to be backed up) -P "mnt/loop" -P "mnt/storage" -P "mnt/tmp" -P "mnt/backupftp" -P "dev/pts" -P "proc" -P "sys" -P "tmp" -P "var/tmp" -P "usr/tmp" -P "usr/portage/distfiles" dar-2.7.17/doc/samples/etc_darrc0000644000175000017520000001217014740171677013402 00000000000000############################################################# # This is the default system wide configuration file for dar # # This file provide a set of options referred each by a target # name. They are not applied unless you specify that target on # command line or included file. For example for par2: # dar <...list of options...> par2 # # This options set are available automatically for dar unless # you define a .darrc in your home directory or use -N option # on command-line. # You can continue using this default file even if you use your # own .darrc file, by including the following in it or explicitly # command-line: # # -B /etc/darrc # # In the following we are using short options here because long # options may not be available everywhere. 
############################################################## # target: par2 # activates: # - par2 file generation when creating an archive # - par2 file verification and correction when testing an archive # usage: dar par2 par2: -B "SOMEPATH/dar_par.dcf" ############################################################## # target: compress-exclusion # avoid compressing types of file known to already be compressed # or to have very bad compression ratio # # usage: dar compress-exclusion compress-exclusion: # here we define some files that have not to be compressed. # First setting case insensitive mode on: -an # Then telling dar that the following masks are glob expression # which is the default, right, but if sooner on command-line the # user swapped to regex, the following mask would not work as expected # any more, so we force back to glob expression in any case: -ag # Now follows all the file specification to never try to compress: # Compressed video format. -Z "*.avi" -Z "*.cr2" -Z "*.flv" -Z "*.jng" -Z "*.m4v" -Z "*.mkv" -Z "*.mov" -Z "*.mp4*" -Z "*.mpeg" -Z "*.mpg" -Z "*.mts" -Z "*.m2ts" -Z "*.oga" -Z "*.swf" -Z "*.vob" -Z "*.webm" -Z "*.wmv" # Compressed animation. -Z "*.mng" # Compressed image format. -Z "*.bmp" -Z "*.gif" -Z "*.ico" -Z "*.jpe" -Z "*.jpeg" -Z "*.jpg" -Z "*.mmpz" -Z "*.mpeg" -Z "*.png" -Z "*.tif" -Z "*.tiff" -Z "*.webp" # Compressed audio format. -Z "*.ac3" -Z "*.als" -Z "*.ape" -Z "*.bonk" -Z "*.flac" -Z "*.m4a" -Z "*.mp2" -Z "*.mp3" -Z "*.mpc" -Z "*.nsf" -Z "*.ogg" -Z "*.speex" -Z "*.spx" -Z "*.weba" -Z "*.wv" # Compressed package. -Z "*.deb" -Z "*.rpm" -Z "*.run" -Z "*.sis" -Z "*.xpi" # Compressed data. -Z "*.7z" -Z "*.Z" -Z "*.bz2" -Z "*.cab" -Z "*.gz" -Z "*.jar" -Z "*.rar" -Z "*.tbz" -Z "*.tbz2" -Z "*.tgz" -Z "*.txz" -Z "*.wsz" -Z "*.wz" -Z "*.xz" -Z "*.zst" -Z "*.zstd" # These are zip files. Not all are compressed, but considering that they can # get quite large it is probably more prudent to leave this uncommented. 
-Z "*.pk3" -Z "*.zip" # You can get better compression on these files, but then you should be # de/recompressing with an actual program, not dar. -Z "*.lz4" -Z "*.zoo" # Other, in alphabetical order. -Z "*.Po" -Z "*.aar" -Z "*.azw" -Z "*.azw3" -Z "*.bx" -Z "*.chm" -Z "*.djvu" -Z "*.docx" -Z "*.epub" -Z "*.f3d" -Z "*.gpg" -Z "*.htmlz" -Z "*.iix" -Z "*.iso" -Z "*.jin" -Z "*.ods" -Z "*.odt" -Z "*.odp" -Z "*.pdf" -Z "*.pptx" -Z "*.ser" -Z "*.svgz" -Z "*.swx" -Z "*.sxi" -Z "*.whl" -Z "*.wings" -Z "*.xlsx" # These are blender bake files. Compression on these is optional in blender. # Blender's compression algorithm is better at compressing these than xz or # any other compression program that I have tested. # Comment only if you use uncompressed blender bake files. -Z "*.bphys" # Dar archives (may be compressed). -Z "*.dar" # Now we swap back to case sensitive mode for masks which is the default # mode: -acase ############################################################## # target: verbose # show both skipped files and files being processed # # usage: dar verbose verbose: -va ############################################################## # target: no-emacs-backup # ignore temporary files or backup files generated by emacs # no-emacs-backup: -ag -X "*~" -X ".*~" ############################################################## # target: samba # take care of daylight saving time for the samba file system # type samba: -H 1 # samba file system need this to properly report date # and not lead dar to resave all files when changing # from summer to winter time and vice versa. ############################################################## # target: dry-run # an alias for --empty option, that get its name because the # only available option letter was 'e' that leads to this non # intuitive option name "empty". 
# dry-run: -e ############################################################## # target: bell # ring the terminal upon user interaction request # bell: -b ############################################################## # target: full-from-diff # rebuilds a full backup from a differential backup and its # full backup of reference # usage: dar -+ new_full -A old_ref_full -@ diff full-from-diff # # can also be used to rebuild a full backup from a decremental # backup and a full backup # usage: dar -+ old_full -A recent_fill -@ decr full-from-diff full-from-diff: -/ '{!(~I)}[Rr] {~S}[O*] P* ; {~s}[*o] *p' dar-2.7.17/doc/samples/date_past_N_days0000755000175000017520000000060614403564520014706 00000000000000#!/bin/bash if [ -z "$1" ] ; then echo "usage $0: " echo " returns the date it was N days ago expressed as seconds since 1969" echo "" echo "example: dar -c backup -af -A \`$0 3\` " echo " \"backup\" will only contain files that have changed during the" echo " last 3 days" exit 1 fi echo $(( `date +%s` - $1 * 86400 )) dar-2.7.17/doc/samples/dar_backups.sh0000644000175000017520000001054414041360213014322 00000000000000#!/bin/bash # Script Name: dar_backups.sh # Author: Roi Rodriguez Mendez & Mauro Silvosa Rivera (Cluster Digital S.L.) # Fixes by: Jason Lewis - jason at NO dickson SPAM dot st # Description: dar_backups.sh is a script to be runned from cron which # backups data and stores it locally and optionally remote using scp. # It decides between doing a master or an incremental backup based # on the existance or not of a master one for the actual month. 
# Revision History: # 23.06.2008 - modified to work with latest version of dar which requires -g before each path to backup - Jason Lewis # 24.10.2006 - changed script to do differential backups based on the last diff # 18.10.2006 - added BACKUP_PATHS variable to simplify adding new paths # Jason Lewis jason@NOdicksonSPAM.st # 22.08.2005 - Creation # Base directory where backups are to be stored BASE_BAK_DIR=/backup # base directory for files to backup. all paths for backing up are listed relative to this path ROOT_DIR=/ # Paths to backup # add paths here, in a space seperated list between round brackets. # you can escape out spaces with \ or '' # Paths should be relative to ROOT_DIR #BACKUP_PATH=(my/first/path another\ path/with\ spaces 'yet another/path/with/spaces') BACKUP_PATHS=( home user/lib/cgi-bin var/www/cgi-bin var/lib/cvs var/lib/svn var/lib/accounting mysql_backup usr/local/bin etc ) # Directory where backups for the actual month are stored (path relative to # $BASE_BAK_DIR) MONTHLY_BAK_DIR=`date -I | awk -F "-" '{ print $1"-"$2 }'` # Variable de comprobacion de fecha CURRENT_MONTH=$MONTHLY_BAK_DIR # Name and path for the backup file. 
SLICE_NAME=${BASE_BAK_DIR}/${MONTHLY_BAK_DIR}/backup_`date -I` # Max backup file size SLICE_SIZE=200M # Remote backup settings REMOTE_BAK="false" REMOTE_HOST="example.com" REMOTE_USR="bakusr" REMOTE_BASE_DIR="/var/BACKUP/example.com/data" REMOTE_MONTHLY_DIR=$MONTHLY_BAK_DIR REMOTE_DIR=${REMOTE_BASE_DIR}/${REMOTE_MONTHLY_DIR} ######################################################## # you shouldn't need to edit anything below this line # # STR='a,b,c'; paths=(${STR//,/ }); TEST=`echo ${paths[@]//#/-g }`;echo $TEST # args=(); for x in "${paths[@]}"; do args+=(-g "$x"); done; program "${args[@]}" #BACKUP_PATHS_STRING=`echo ${BACKUP_PATHS[@]//#/-g }` args=() for x in "${BACKUP_PATHS[@]}"; do args+=(-g "$x"); done; BACKUP_PATHS_STRING="${args[@]}" echo backup path string is "$BACKUP_PATHS_STRING" ## FUNCTIONS' DEFINITION # Function which creates a master backup. It gets "true" as a parameter # if the monthly directory has to be created. function master_bak () { if [ "$1" == "true" ] then mkdir -p ${BASE_BAK_DIR}/${MONTHLY_BAK_DIR} fi /usr/bin/dar -m 256 -s $SLICE_SIZE -y -R $ROOT_DIR \ $BACKUP_PATHS_STRING -c ${SLICE_NAME}_master #> /dev/null if [ "$REMOTE_BAK" == "true" ] then /usr/bin/ssh ${REMOTE_USR}@${REMOTE_HOST} "if [ ! -d ${REMOTE_DIR} ]; then mkdir -p $REMOTE_DIR; fi" for i in `ls ${SLICE_NAME}_master*.dar` do /usr/bin/scp -C -p $i ${REMOTE_USR}@${REMOTE_HOST}:${REMOTE_DIR}/`basename $i` > /dev/null done fi } # Makes the incremental backups function diff_bak () { MASTER=$1 /usr/bin/dar -m 256 -s $SLICE_SIZE -y -R $ROOT_DIR \ $BACKUP_PATHS_STRING -c ${SLICE_NAME}_diff \ -A $MASTER #> /dev/null if [ "$REMOTE_BAK" == "true" ] then for i in `ls ${SLICE_NAME}_diff*.dar` do /usr/bin/scp -C -p $i ${REMOTE_USR}@${REMOTE_HOST}:${REMOTE_DIR}/`basename $i` > /dev/null done fi } ## MAIN FLUX # Set appropriate umask value umask 027 # Check for existing monthly backups directory if [ ! 
-d ${BASE_BAK_DIR}/${MONTHLY_BAK_DIR} ] then # If not, tell master_bak() to mkdir it master_bak "true" else # Else: # MASTER not void if a master backup exists # original line to get the master backup does not take into account the diffs # MASTER=`ls ${BASE_BAK_DIR}/${MONTHLY_BAK_DIR}/*_master*.dar | tail -n 1 | awk -F "." '{ print $1 }'` # new master line gets the latest dar backup and uses that to make the diff MASTER=`ls -t ${BASE_BAK_DIR}/${MONTHLY_BAK_DIR}/*.dar | head -n 1 | awk -F "." '{ print $1 }'` # Check if a master backup already exists. if [ "${MASTER}" != "" ] then # If it exists, it's needed to make a differential one diff_bak $MASTER else # Else, do the master backup master_bak "false" fi fi dar-2.7.17/doc/samples/PN_ftpbackup.sh0000644000175000017520000000744214041360213014423 00000000000000#!/bin/bash # ftpbackup.sh - Version 1.1 - 2006-01-09 - Patrick Nagel # Carry out backups automatically and put the resulting # archive onto a backup FTP server. Mail the result to # root. # # Dependencies: ncftp # Change this to your needs ########################### PASSWORDFILE="/root/ftpbackup.credentials" # $PASSWORDFILE must look like this (and should # of course only be readable for the user who # executes the script): # ----------------------------- # |USER="username" | # |PASS="password" | # |SERVER="hostname.of.server"| # ----------------------------- LOGFILE="/root/ftpbackup.log" # The logfile will be gzipped and be available # as $LOGFILE.gz after the script exits. NUMBEROFBACKUPS=2 # How many different backups should this script # carry out? BACKUPCOMMAND[1]="/root/backup-root.sh" # Backup command which carries out 1st backup. # Each backup command must create exactly ONE # archive file. BACKUPCOMMAND[2]="/root/backup-storage.sh" # Backup command which carries out 2nd backup. BACKUPCOMMAND[n]="" # Backup command which carries out nth backup. LOCALBACKUPDIR="/mnt/storage/backup" # This is where the backup archive (must be ONE # FILE!) 
will be stored by the $BACKUPCOMMAND[x]
                                # program.
MOUNTPOINT="/mnt/storage"       # The mountpoint of the partition where the
                                # backup archives will be stored on.
                                # For free space statistics.
BACKUPFTPQUOTA=42949672960      # Backup FTP server quota or total storage amount
                                # (in bytes).

#######################################################
# Initial variables and checks

which ncftp &>/dev/null || { echo "Missing ncftp, which is a dependency of this script."; exit 1; }

STARTTIME="$(date +%T)"

# Functions

function backup_to_ftp_start() {
    ncftpbatch -D
    return
}

function backup_to_ftp_queue() {
    # Puts newest file in ${LOCALBACKUPDIR} to the backup FTP server.
    source ${PASSWORDFILE}
    BACKUPFILE="${LOCALBACKUPDIR}/$(ls -t -1 ${LOCALBACKUPDIR} | head -n 1)"
    ncftpput -bb -u ${USER} -p ${PASS} ${SERVER} / ${BACKUPFILE}
    return
}

function backup_local_used() {
    du -bs ${LOCALBACKUPDIR} | awk '{printf($1)}'
    return
}

function backup_local_free() {
    df -B 1 --sync ${MOUNTPOINT} | tail -n 1 | awk '{printf($4)}'
    return
}

function backup_ftp_used() {
    source ${PASSWORDFILE}
    ncftpls -l -u ${USER} -p ${PASS} ftp://${SERVER} | grep -- '^-' | echo -n $(($(awk '{printf("%i+", $5)}'; echo "0")))
    return
}

function backup_ftp_free() {
    echo -n $((${BACKUPFTPQUOTA} - $(backup_ftp_used)))
    return
}

function backup_success() {
    {
        echo -en "Backup succeeded.\n\nBackup started at ${STARTTIME} and ended at $(date +%T).\n\n"
        echo -en "Statistics after backup (all numbers in bytes):\n"
        echo -en "Used on Backup-FTP: $(backup_ftp_used)\n"
        echo -en "Free on Backup-FTP: $(backup_ftp_free)\n"
        echo -en "Used on local backup directory: $(backup_local_used)\n"
        echo -en "Free on local backup directory: $(backup_local_free)\n"
    } | mail -s "Backup succeeded" root
    return
}

function backup_failure_exit() {
    {
        echo -en "Backup failed!\n\nBackup started at ${STARTTIME} and ended at $(date +%T).\n\n"
        echo -en "Statistics after backup failure (all numbers in bytes):\n"
        echo -en "Used on Backup-FTP: $(backup_ftp_used)\n"
        echo -en "Free on Backup-FTP: $(backup_ftp_free)\n"
        echo -en "Used on local backup directory: $(backup_local_used)\n"
        echo -en "Free on local backup directory: $(backup_local_free)\n"
    } | mail -s "Backup FAILED" root
    gzip -f ${LOGFILE}
    exit 1
}

# Main

rm -f ${LOGFILE} # In case the script has been aborted before
{
    for ((i=1; i<=${NUMBEROFBACKUPS}; i+=1)); do
        ${BACKUPCOMMAND[$i]} >>${LOGFILE} 2>&1 && backup_to_ftp_queue >>${LOGFILE} 2>&1
    done && \
    backup_to_ftp_start >>${LOGFILE} 2>&1 && \
    backup_success
} || backup_failure_exit
gzip -f ${LOGFILE}
dar-2.7.17/doc/samples/dar_par.dcf0000644000175000017520000000140314041360213013570 00000000000000
# configuration file for dar to have Parchive integrated with DAR
# to be passed to dar as argument of -B option (-B dar_par.dcf)
# either directly on command line or through $HOME/.darrc or /etc/darrc
# file

create:
-E 'SOMEPATH/dar_par_create.duc "%p" "%b" %N %e %c 2'
# 2 stands for 2% of redundancy
# adjust it to your needs

test:
-E 'SOMEPATH/dar_par_test.duc "%p" "%b" %N %e %c'
# note that you may need to set the path to dar_par_test.duc
# and dar_par_create.duc; at dar/libdar installation, SOMEPATH
# is substituted by the path where these are installed to

# fix from Sergey Feo
default:
-E "echo Warning: dar_par.dcf will not be used in this operation. Please review command line options. -c or -t should be used before -B ...dar_par.dcf"
dar-2.7.17/doc/samples/Makefile.in0000644000175000017520000003536614767510000013574 00000000000000
# Makefile.in generated by automake 1.16.5 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2021 Free Software Foundation, Inc.

# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = 
$(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = doc/samples ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/gettext.m4 \ $(top_srcdir)/m4/host-cpu-c-abi.m4 $(top_srcdir)/m4/iconv.m4 \ $(top_srcdir)/m4/intlmacosx.m4 $(top_srcdir)/m4/lib-ld.m4 \ $(top_srcdir)/m4/lib-link.m4 $(top_srcdir)/m4/lib-prefix.m4 \ $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/m4/nls.m4 \ $(top_srcdir)/m4/po.m4 $(top_srcdir)/m4/progtest.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(dist_noinst_DATA) \ $(am__DIST_COMMON) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac DATA = $(dist_noinst_DATA) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) am__DIST_COMMON = $(srcdir)/Makefile.in README DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = 
@CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CXXSTDFLAGS = @CXXSTDFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DOXYGEN_PROG = @DOXYGEN_PROG@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GETTEXT_MACRO_VERSION = @GETTEXT_MACRO_VERSION@ GMSGFMT = @GMSGFMT@ GMSGFMT_015 = @GMSGFMT_015@ GPGME_CFLAGS = @GPGME_CFLAGS@ GPGME_CONFIG = @GPGME_CONFIG@ GPGME_LIBS = @GPGME_LIBS@ GPGRT_CONFIG = @GPGRT_CONFIG@ GREP = @GREP@ HAS_DOT = @HAS_DOT@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ INTLLIBS = @INTLLIBS@ INTL_MACOSX_LIBS = @INTL_MACOSX_LIBS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL_CFLAGS = @LIBCURL_CFLAGS@ LIBCURL_LIBS = @LIBCURL_LIBS@ LIBICONV = @LIBICONV@ LIBINTL = @LIBINTL@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTHREADAR_CFLAGS = @LIBTHREADAR_CFLAGS@ LIBTHREADAR_LIBS = @LIBTHREADAR_LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBICONV = @LTLIBICONV@ LTLIBINTL = @LTLIBINTL@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MSGFMT = @MSGFMT@ MSGMERGE = @MSGMERGE@ MSGMERGE_FOR_MSGFMT_OPTION = @MSGMERGE_FOR_MSGFMT_OPTION@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ POSUB = @POSUB@ PYEXT = @PYEXT@ PYFLAGS = @PYFLAGS@ RANLIB = @RANLIB@ SED 
= @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ UPX_PROG = @UPX_PROG@ USE_NLS = @USE_NLS@ VERSION = @VERSION@ XGETTEXT = @XGETTEXT@ XGETTEXT_015 = @XGETTEXT_015@ XGETTEXT_EXTRA_OPTIONS = @XGETTEXT_EXTRA_OPTIONS@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dot = @dot@ doxygen = @doxygen@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ groff = @groff@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ pkgconfigdir = @pkgconfigdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ tmp = @tmp@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ upx = @upx@ NO_EXE_SAMPLES = darrc_sample sample1.txt README automatic_backup.txt JH-readme.txt JH_dar_archiver.options JH_darrc cluster_digital_readme.txt index.html PN_backup-root.options PN_backup-storage.options Patrick_Nagel_Note.txt EXE_SAMPLES = cdbackup.sh pause_every_n_slice.duc automatic_backup dar_backup 
dar_rqck.bash JH-dar-make_user_backup.sh cluster_digital_backups.sh dar_par_create.duc dar_par_test.duc MyBackup.sh.tar.gz PN_backup-root.sh PN_backup-storage.sh PN_ftpbackup.sh dar_backups.sh available_space.duc date_past_N_days dist_noinst_DATA = $(NO_EXE_SAMPLES) $(EXE_SAMPLES) dar_par.dcf etc_darrc all: all-am .SUFFIXES: $(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/samples/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu doc/samples/Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) 
$(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(DATA) installdirs: install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: @$(NORMAL_INSTALL) $(MAKE) $(AM_MAKEFLAGS) install-data-hook install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-local .MAKE: install-am install-data-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am \ install-data-hook install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-man install-pdf install-pdf-am \ install-ps install-ps-am install-strip installcheck \ installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-generic \ mostlyclean-libtool pdf pdf-am ps ps-am tags-am uninstall \ uninstall-am uninstall-local .PRECIOUS: Makefile install-data-hook: $(INSTALL) -d $(DESTDIR)$(pkgdatadir)/samples sed -e "s%SOMEPATH%$(pkgdatadir)/samples%g" '$(srcdir)/dar_par.dcf' > $(DESTDIR)$(pkgdatadir)/samples/dar_par.dcf chmod 0644 $(DESTDIR)$(pkgdatadir)/samples/dar_par.dcf for f in $(NO_EXE_SAMPLES); do $(INSTALL) -m 0644 '$(srcdir)/'"$${f}" $(DESTDIR)$(pkgdatadir)/samples; done for f in $(EXE_SAMPLES); do $(INSTALL) -m 0755 '$(srcdir)/'"$${f}" 
$(DESTDIR)$(pkgdatadir)/samples; done
	$(INSTALL) -d $(DESTDIR)$(sysconfdir)
	sed -e "s%SOMEPATH%$(pkgdatadir)/samples%g" '$(srcdir)/etc_darrc' > $(DESTDIR)$(sysconfdir)/darrc

uninstall-local:
	rm -rf $(DESTDIR)$(pkgdatadir)/samples
# $(sysconfdir)/darrc not removed as it may contain system admin specific configuration

# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:
dar-2.7.17/doc/samples/dar_par_test.duc0000755000175000017520000000236014041360213014654 00000000000000
#!/bin/sh
###
#
# this script is to be launched on dar command line when testing an archive with the -s option (slicing)
# you need to run this script from dar, adding the following argument on the command line
#
# -E "dar_par_test.duc %p %b %n %e %c"
#
###
#
# if you prefer you can also add the line above in your $HOME/.darrc file
# under the test: conditional statement (see dar man page)
#
###
#
# usage par_script slice.basename slice.number extension
# tests and if necessary repairs a slice file using its Parchive redundancy file
#
###

if [ "$1" = "" -a "$2" = "" -a "$3" = "" -a "$4" = "" -a "$5" = "" ]; then
    echo "usage: $0 "
    echo "$0 tests and if necessary repairs the given slice using Parchive redundancy files"
    exit 1
fi

if [ "$3" = "0" ]; then
    exit 0
fi

PAR=par2
SLICE="$1/$2.$3.$4"

if [ ! -r $SLICE ]; then
    echo "`basename $0`: Cannot find or read the slice $SLICE, skipping, Dar will ask user for it"
    exit 0;
fi

echo "$PAR verification of slice $SLICE..."
if ! $PAR v "$SLICE" ; then
    echo "trying to repair the slice..."
    if ! $PAR r "$SLICE" ; then
        echo "PAR repair failed. (read-only filesystem ?)"
        exit 1
    fi
    echo "verifying after repair..."
    exec $PAR v "$SLICE"
fi
dar-2.7.17/doc/samples/JH-readme.txt0000644000175000017520000000056114041360213014003 00000000000000
The script makes a backup of the user's $HOME, either full or incremental,
skipping compression for already-compressed files and media, and skipping
some unimportant directories such as ~/Trash. The darrc is expected to be in
/etc/darrc; dar_archiver.options is used by the script. The script has a
section 'OPTIONS TO MODIFY' that should be adjusted for customization.

Regards, Jakub Holy
dar-2.7.17/doc/samples/PN_backup-storage.options0000644000175000017520000000103414041360213016423 00000000000000
### Options that are appended to the dar command:
# No warning when not run from a terminal
-Q
# Don't try to read darrc files
-N
# Be verbose (so everything can be logged)
-v
# No warn on overwrite (should not happen anyway)
-w
# Compression level
-z1
# Keep empty directories as such, so they can be restored
-D
# Blowfish encryption
-K bf:secretpassword
# Directory to backup
-R "/mnt/storage/"
# Excludes (must be specified as relative paths to the directory
# that is to be backed up)
-P "backup"
-P "tmp"
-P "winhome"
-P "ftp/cisco"
dar-2.7.17/doc/samples/cdbackup.sh0000644000175000017520000001060114041360213013612 00000000000000
#!/bin/sh
#script for doing sliced full and incremental backups to cdr
#stef at hardco.de, 2003

#full backup: "cdbackup.sh full"
#incremental backup: "cdbackup.sh "
#Reference archive name is the filename of the first slice without .number.dar
#Dar will also search/ask for the last reference archive slice.
#A plain catalogue file can also be used as an incremental reference.

#backs up everything starting from / (see DAR_PARAMS) to iso/rr cdrs
#Archive slices are stored temporarily in ./ (see TDIR) and get deleted
#if written successfully to cdr.
#The first cdr will also contain the dar_static executable.
#If anything goes wrong while trying to write to cdr, you can try again
#or keep the current archive slice as a file in ./ (see TDIR).
#For backing up to files only, simply accept the cdr write error and
#answer with 'keep file' (or even better: use dar directly).
#Slice size is for 700MB cdr blanks, see (and maybe change) DAR_PARAMS below.
#For (slow!) compression, add a -y or -z parameter to DAR_PARAMS.

#The archive slice file names are:
#- for full backups: YYYY-MM-DD..dar
#- for incrementals: YYYY-MM-DD_YYYY-MM-DD..dar
# The second date is the name of the reference archive, so you can end
# up with names like YYYY-MM-DD_YYYY-MM-DD_YYYY-MM-DD_YYYY-MM-DD.1.dar
# for a four level stacked incremental backup.

#Files which don't get backed up: (see DAR_PARAMS below)
#- the slice files of the current archive
#- the slice files of the reference archive
#- files called "darswap" (for manually adding more swap space for incrementals)
#- directory contents of /mnt, /cdrom, /proc, /dev/pts

#hints:
#- You need at least 700MB of free disk space in ./ (or in TDIR, if changed).
#- For incrementals, you need about 1KB of memory per tested file.
#  Create a large file "darswap" and add this as additional swap space.
#- If you are doing more than one backup per day, the filenames may interfere.
#- Carefully read the dar man page as well as the excellent TUTORIAL and NOTES.

#uncompressed, for 700MB cdr blanks:
DAR_PARAMS="-s 699M -S 691M -R / -P dev/pts -P proc -P mnt -P cdrom -D"
#temporary or target directory:
TDIR="."
#I'm using a USB CDR drive, so I don't know which 'scsi'-bus it is on.
#Cdrecord -scanbus is grepped for the following string:
DRIVENAME="PLEXTOR"
#Also because of USB I have to limit drive speed:
DRIVESPEED=4

#used external programs:
DAR_EXEC="/root/app/dar-1.3.0/dar"             #tested: dar-1.3.0
DAR_STATIC="/root/app/dar-1.3.0/dar_static"    #copied to the first cdr
MKISOFS="/root/app/cdrtools-2.0/bin/mkisofs"   #tested: cdrtools-2.0
CDRECORD="/root/app/cdrtools-2.0/bin/cdrecord" #tested: cdrtools-2.0
GREP="/usr/bin/grep"                           #tested: gnu grep 2.2
BASENAME="/usr/bin/basename"
DATECMD="/bin/date"
MKDIR="/bin/mkdir"
MV="/bin/mv"
CP="/bin/cp"
RM="/bin/rm"

#initial call of this script (just executes dar with the proper parameters):
DATE=`$DATECMD -I`
START=`$DATECMD`
if [ -n "$1" ] && [ -z "$2" ] ; then
    if [ "$1" = "full" ] ; then
        echo "starting full backup"
        $DAR_EXEC -c "$TDIR/$DATE" \
            -X "$DATE.*.dar" -X "darswap" \
            -N $DAR_PARAMS -E "$0 %p %b %N"
    else
        echo "starting incremental backup based on $1"
        LDATE=`$BASENAME $1`
        $DAR_EXEC -c "$TDIR/${DATE}_$LDATE" -A $1 \
            -X "${DATE}_$LDATE.*.dar" -X "$LDATE.*.dar" -X "darswap" \
            -N $DAR_PARAMS -E "$0 %p %b %N"
    fi
    echo "backup done"
    echo "start: $START"
    echo "end: `$DATECMD`"

#called by dar's -E parameter after each slice:
elif [ -r "$1/$2.$3.dar" ] ; then
    echo -n "creating cdr $3 volume dir containing $2.$3.dar"
    $MKDIR "$1/$2.$3.cdr"
    $MV "$1/$2.$3.dar" "$1/$2.$3.cdr"
    if [ "$3" = "1" ] ; then
        echo -n " and dar_static"
        $CP $DAR_STATIC "$1/$2.$3.cdr"
    fi
    echo
    DEV=`$CDRECORD -scanbus 2>/dev/null | $GREP $DRIVENAME | cut -b2-6`
    CDBLOCKS=`$MKISOFS -R -print-size -quiet $1/$2.$3.cdr`
    echo "writing cdr $3 (${CDBLOCKS}s)..."
    KEEPFILE="n"
    until $MKISOFS -R "$1/$2.$3.cdr" | \
        $CDRECORD -eject -s dev=$DEV speed=$DRIVESPEED tsize=${CDBLOCKS}s -
    do
        echo -n "write error, try [A]gain or [k]eep $2.$3.dar? "
        read ERR
        if [ "$ERR" = "k" ] ; then
            KEEPFILE="y"
            break
        fi
    done
    if [ "$KEEPFILE" = "y" ] ; then
        echo "cdr not written, keeping $2.$3.dar as file"
        $MV "$1/$2.$3.cdr/$2.$3.dar" "$1/$2.$3.dar"
    fi
    echo "removing volume dir"
    $RM -rf "$1/$2.$3.cdr"
    echo "backup continues"
else
    echo "usage: $0 "
fi
exit 0
dar-2.7.17/doc/samples/dar_backup0000644000175000017520000001070014403564520013531 00000000000000
#!/usr/bin/perl -w
use strict;
use diagnostics;

# Device that is the DVD drive
my $DVD=("/dev/hdc");

# Size of each slice - DVD max is 4482M
# MC - for testing
# my $SLICE_SIZE=("10M");
# my $SLICE_SIZE=("4400M"); # doesn't work
# BUG - Linux isofs limited to single files of 2^32=4096MB
# my $SLICE_SIZE=("4000M");
# value used by Daromizer is bigger than mine, use it
# my $SLICE_SIZE=("4189500K");
# need more space for parity data
my $SLICE_SIZE=("4000M");

# directory that all paths must be relative to
# NOTE - all backup paths are relative to this
my $ROOT_DIR=("/mnt/backup");
# where all created files will be stored
my $STORAGEDIR=("/mnt/backup/backups/");
# list of dirs to be backed up
# NOTE 1 - these are paths relative to $ROOT_DIR, above
# NOTE 2 - this is used for naming; everything after the last / is used
# for the base name. DO NOT have two things be the same (like /usr/bin and
# /usr/local/bin). Otherwise, one will be overwritten
# MC for testing
# my @BACKUPDIRS=("test");
my @BACKUPDIRS=("local","home","pub");

# this is the path to the slice as expressed in things that dar will
# substitute the right values for (it's just used in 2 places)
my $SLICE_PATH=("%p/%b.%N.%e");
my $SLICE_NAME=("%b.%N");
my $PARITY_PATH=("%p/%b.%N.par2");
# par2 creates a bunch of "vol" files, we need those too
my $PARITY_FILES=("%p/%b.%N.*.par2");

# list of stuff to be compressed.
This must be in the form of
#   -Z \"*.mask\"
# with -Z repeated for each one
my $NO_COMPRESS_LIST=("-Z \"*.gz\" -Z \"*.GZ\" -Z \"*.bz2\" -Z \"*.BZ2\" -Z \"*.zst\" -Z \"*.ZST\" -Z \"*.zip\" -Z \"*.ZIP\" -Z \"*.ogg\" -Z \"*.OGG\" -Z \"*.mp3\" -Z \"*.MP3\" -Z \"*.mpg\" -Z \"*.MPG\" -Z \"*.mpeg\" -Z \"*.MPEG\" -Z \"*.wmv\" -Z \"*.WMV\" -Z \"*.avi\" -Z \"*.AVI\" -Z \"*.jpg\" -Z \"*.JPG\" -Z \"*.jpeg\" -Z \"*.JPEG\" -Z \"*.png\" -Z \"*.PNG\" -Z \"*.gif\" -Z \"*.GIF\"");

my $PRE_PARITY_MESSAGE=("echo ; echo Calculating parity information; echo");
my $PARITY_COMMAND=("par2create -r10 $PARITY_PATH $SLICE_PATH");
my $PRE_BLANK_MESSAGE=("echo ; echo Done archive, erasing DVD; echo");
my $BLANK_COMMAND=("dvd+rw-format -force /dev/hdc");
my $PRE_REC_MESSAGE=("echo ; echo Done erasing, burning to DVD; echo");

# Command to record the DVD, with options
# -dvd-compat = make the most compatible DVD by closing the session
# -Z = create a new session
# -r = generate sane rock ridge extensions
# -J = generate Joliet extensions
# -V = volume ID
# %b = dar will substitute the base name
# %N = dar will substitute the number of the slice
# %p = dar will substitute slice path
# FOR TESTING = -dry-run
my $RECORD_COMMAND=("growisofs -dvd-compat -Z $DVD -r -J -V $SLICE_NAME $SLICE_PATH $PARITY_PATH $PARITY_FILES");
my $EJECT_COMMAND=("eject $DVD");
my $POST_REC_MESSAGE=("echo ; echo Done burning $SLICE_NAME ; echo");

# deletes files once done with them
# note - use AFTER record command
# MC - for testing
# my $DELETE_COMMAND=("echo deleting $SLICE_PATH $PARITY_PATH $PARITY_FILES");
my $DELETE_COMMAND=("rm -f $SLICE_PATH $PARITY_PATH $PARITY_FILES");

# dar with basic options
# -y = compress with bzip2 using default compression of 6
# -s = slice it up
# -R = root dir that all things to be backed up live in
# -D = store empty directories too
# -p = pause and wait for user to change DVD before continuing
# -c (used below) = create an archive called whatever
# FOR TESTING = -e
my $DAR=("dar -y -s
$SLICE_SIZE -R $ROOT_DIR -D $NO_COMPRESS_LIST -p -E \"$PRE_PARITY_MESSAGE ; $PARITY_COMMAND ; $PRE_BLANK_MESSAGE ; $BLANK_COMMAND ; $PRE_REC_MESSAGE ; $RECORD_COMMAND ; $EJECT_COMMAND ; $DELETE_COMMAND ; $POST_REC_MESSAGE\"");

&main;

sub main{
    my $backup_base;
    my $backupdir;
    my ($day, $month, $year) = (localtime)[3,4,5];
    $year+=1900; # compensate for 1900 based year
    $month+=1;   # compensate for base 0 months
    my $targetbase;
    my $pause; # garbage input...

    foreach $backupdir (@BACKUPDIRS){
        # this gets rid of paths and such from $backupdir, just in case
        $backup_base=$backupdir;
        $backup_base =~ s/^\///;    # remove leading /
        $backup_base =~ s/\w+\///g; # remove everything matching "someword/"
        $targetbase=$STORAGEDIR.$backup_base."_".$month."_".$day."_".$year;
        print("Working on $backup_base\n");
        # MC for debugging
        # print("Command is: $DAR $backupdir -c $targetbase");
        system("$DAR $backup_base -c $targetbase");
        print "Work on $backup_base complete. Change the DVD and\n";
        print "press any key to continue...";
        $pause = <STDIN>; # Like a PAUSE statement in DOS .bat files
    }
}
dar-2.7.17/doc/samples/JH_dar_archiver.options0000644000175000017520000000201214041360213016130 00000000000000
#################################
#                               #
#    DAR Archiver - options     #
#                               #
#################################
# -m N - do not compress files smaller than N [B]
# -Z pattern - matching files are not compressed
# -P subdir - ignore (don't backup) directories matching the pattern; relative to -R
# -X pattern - exclude files matching pattern; it may not include a file path, only the name
# -R /home/aja - the directory to backup
# -s 700M - cut the archive into 'slices' (parts) of max. size 700 MB
# -y [level] - compress with bzip2
# -G - generate a separate catalogue of the archive
# -D,--empty-dir - create empty directories for the excluded ones (with -P)
# -M - skip other filesystems (i.e., mounted filesystems).
# -v - verbose output
# --beep - beep when a user action is required
# !!! The option -c , has to be on the cmd line
# !!! The option -R as well

## General options
-s 700M
-m 256
-y
-M
-v
--empty-dir
--beep

## Skipped directories
-P .java/deployment
-P .netbeans/var
-P Trash
dar-2.7.17/doc/samples/dar_par_create.duc0000755000175000017520000000200514041360213015144 00000000000000
#!/bin/sh
###
#
# this script is to be launched on dar command line when creating an archive with the -s option (slicing)
# you need to run this script from dar, adding the following argument on the command line
#
# -E "dar_par_create.duc %p %b %N %e %c 20"
#
# note that 20 means 20% of redundancy, tune it to your needs
#
###
#
# if you prefer you can also add the line above in your $HOME/.darrc file
# under the create: conditional statement (see dar man page)
#
###
#
# usage par_script slice.basename slice.number extension level
# generates a Parchive redundancy file from the slice file
#
###

if [ "$1" = "" -a "$2" = "" -a "$3" = "" -a "$4" = "" -a "$6" = "" ]; then
    echo "usage: $0 "
    echo "$0 builds a Parchive redundancy file for the given slice"
    exit 1
fi

# change according to your needs
PAR=par2

echo "creating PAR file for file $1/$2.$3.dar ..."
exec $PAR c -r$6 -n1 "$1/$2.$3.$4"
# the script's return code is that of par
dar-2.7.17/doc/samples/JH-dar-make_user_backup.sh0000644000175000017520000001177514041360213016416 00000000000000
#!/bin/sh
#################################
#                               #
#      DAR Archiver script      #
#                               #
#################################
# Jakub Holy 25.4.2005
# This file: $HOME/bin/dar-make_user_backup.sh
# IMPORTANT: This script depends upon /etc/darrc (options for what not to compress/archive)
# But the file is ignored if $HOME/.darrc exists.
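The par2 wrapper scripts above receive dar's -E substitutions (%p %b %N %e %c) as positional parameters and rebuild the slice file name from them. A minimal sketch of that path construction; `slice_path` is an illustrative helper name, not part of the dar distribution:

```shell
# Sketch: how dar's -E substitutions map to the slice file name that
# wrappers like dar_par_create.duc operate on as "$1/$2.$3.$4".
slice_path() {
    # $1=path (%p)  $2=basename (%b)  $3=slice number (%N)  $4=extension (%e)
    printf '%s/%s.%s.%s\n' "$1" "$2" "$3" "$4"
}

slice_path /mnt/backup monthly_full 1 dar   # -> /mnt/backup/monthly_full.1.dar
```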
# Additional options are read from dar_archiver.options (see # $DAR_OPTIONS_FILE below) USAGE="echo -e USAGE: \n$0 -full | -inc" # ----------------------------- OPTIONS TO MODIFY DIR_TO_ARCHIVE=$HOME DEST_DIR=/mnt/mandrake/debian-bkp/ DAR_OPTIONS_FILE="$HOME/bin/dar_archiver.options" ARCHIVE_NAME="`/bin/date -I`_$USER" # Ex: 2005-04-25_jholy DAR_INFO_DIR="$HOME/backup" DAR_MANAGER_DB=${DAR_INFO_DIR}/dar_manager_database.dmd LAST_FULL_BACKUP_ID="2005-04-25" # The last full backup - the unique part of its name LAST_FULL_BACKUP=${DAR_INFO_DIR}/${LAST_FULL_BACKUP_ID}_aja-full-katalog MSG="" LOG_FILE="${DAR_INFO_DIR}/zaloha-aja-dar.log" # PARSE COMMAND LINE --------------------------------------------- INC_BKP_OPT="" # dar options needed to create an incremental backup: empty => full bkp if [ $# -ne 1 ]; then echo "ERROR: Wrong number of parameters" $USAGE exit 1 elif [ "X$1" != "X-full" -a "X$1" != "X-inc" ]; then echo "Unknown parameter" $USAGE exit 1 else if [ "X$1" = "X-full" ]; then echo "DAR: Doing FULL backup."; ARCHIVE_NAME="${ARCHIVE_NAME}-full" fi if [ "X$1" = "X-inc" ]; then echo "DAR: Doing INCREMENTAL backup with respect to $LAST_FULL_BACKUP."; INC_BKP_OPT=" -A $LAST_FULL_BACKUP " ARCHIVE_NAME="${ARCHIVE_NAME}-inc-wrt${LAST_FULL_BACKUP_ID}" fi echo "See the log in $LOG_FILE" fi # ----------------------------- OPTIONS CONT'D ARCHIVE=${DEST_DIR}/${ARCHIVE_NAME} CATALOGUE=${DAR_INFO_DIR}/${ARCHIVE_NAME}-katalog echo "-----------------------" >> "$LOG_FILE" # -m N - soubory pod N [B] nejsou komprimovany # -Z pattern - soub. odpovidajici vzoru nejsou komprimovany # -P subdir - adresare kt. se nezalohuji; relativni w.r.t. 
-R # -X pattern - exclude files matching pattern; nesmi tam byt file path # -R /home/aja - adresar, ktery zalohujeme # -s 700M - na jak velke kusy se archiv rozseka # -y [level] - proved bzip2 kompresi # -c `date -I`_bkp - vystupni archiv (pribude pripona .dar) # -G - generuj zvlast katalog archivu # -D,--empty-dir - vtvor prazdne adresare pro ty excludovane (s -P) # -M - skip other filesystems (tj. namountovane FS). # -v - verbose output # --beep - pipni kdyz je pozadovana uzivatelova akce # -A basename - vytvor incremental backupwrt archive se zakladem jmena 'basename' # Misto archivu lze pouzit i catalog. # Soubory kt. nelze komprimovat (upper i lower case): # bz2 deb ear gif GIF gpg gz chm jar jpeg jpg obj pdf png rar rnd scm svgz swf # tar tbz2 tgz tif tiff vlt war wings xpi Z zargo zip trezor COMMAND="dar -c $ARCHIVE -R $DIR_TO_ARCHIVE -B $DAR_OPTIONS_FILE $INC_BKP_OPT" echo "Backup started at: `date`" >> "$LOG_FILE" echo "Making backup into $ARCHIVE; command: $COMMAND" >> "$LOG_FILE" echo "Making backup into $ARCHIVE; command: $COMMAND" ### ARCHIVACE ----------------------------------------------------------------- $COMMAND # Perform the archive command itself RESULT=$? # Get its return value ( 0 == ok) ### TEST THE OUTCOME if [ $RESULT -eq 0 ]; then ## Check the archive ........................................................ echo "Backup done at: `date`. Going to test the archive." >> "$LOG_FILE" echo "Backup done at: `date`. Going to test the archive." if dar -t $ARCHIVE # > /dev/null # to ignore stdout in cron uncomment this then MSG="Archive created & successfully tessted."; else MSG="Archive created but the test FAILED"; fi echo "Test of the archive done at: `date`." >> "$LOG_FILE" echo "Test of the archive done at: `date`." 
else MSG="The backup FAILED (error code $RESULT)" echo "$MSG" >> "$LOG_FILE" echo >> "$LOG_FILE" echo -n "Ended at: " >> "$LOG_FILE" date >> "$LOG_FILE" echo >> "$LOG_FILE" echo "$MSG" exit 1 fi ### CATALOGUE - import into the manager ............................................ echo "Going to create a catalogue of the archive..." >> "$LOG_FILE" echo "Going to create a catalogue of the archive..." dar -C "$CATALOGUE" -A "$ARCHIVE" dar_manager -v -B "$DAR_MANAGER_DB" -A "$ARCHIVE" echo "The catalogue created in $CATALOGUE and imported into the base $DAR_MANAGER_DB" >> "$LOG_FILE" echo "The catalogue created in $CATALOGUE and imported into the base $DAR_MANAGER_DB" echo "$MSG" >> "$LOG_FILE" echo >> "$LOG_FILE" echo -n "Ended at: " >> "$LOG_FILE" date >> "$LOG_FILE" echo >> "$LOG_FILE" echo "$MSG" ### Incremental backup # -A dar_archive - specifies a former backup as a base for this incremental backup # Ex: dar ... -A a_full_backup # there's no '.dar', only the archive's basename # Note: instead of the original dar_archive we can use its catalogue ### Extract the catalogue from a backup # Ex: dar -A existing_dar_archive -C create_catalog_file_basename dar-2.7.17/doc/samples/PN_backup-storage.sh0000644000175000017520000000015514041360213015345 00000000000000#!/bin/bash dar -c "/mnt/storage/backup/storage_$(date +%Y-%m-%d-%H%M%S)" -B "/root/backup-storage.options" dar-2.7.17/doc/samples/dar_rqck.bash0000644000175000017520000000111714041360213014131 00000000000000#!/bin/bash MT=$(sed '/^MemTotal/!d;s/.* //' /proc/meminfo) echo -e "\n\tyou have $MT total memory" ST=$(sed '/^SwapTotal/!d;s/.* //' /proc/meminfo) echo -e "\n\tyou have $ST total swap" P=$(mount | sed '/^none/d' | awk '{print $3}') for p in $P do fc=$(find $p -xdev \ -path '/tmp' -prune -o \ -path '/var/tmp' -prune -o \ -print | wc -l) echo -e "\n\tpartition \"$p\" contains $fc files" (( iioh = ($fc * 1300)/1024 )) echo -e "\tdar differential backup with infinint requires $iioh kB memory" done echo # /proc and
/sys (and /dev if it's udev) are excluded by "-xdev" dar-2.7.17/doc/samples/Patrick_Nagel_Note.txt0000644000175000017520000000237614041360213015745 00000000000000Follows a copy from Patrick Nagel site at http://www.patrick-nagel.net/scripts/ftpbackup ----------------------------------------------- I wrote ftpbackup.sh to conveniently backup my root server. My root server provider offers a 40 GB FTP storage, where I can store backup archives. To put them on there by hand was a bit of a hassle, so I wrote this little script. It calls my backup scripts (namely backup-root.sh and backup-storage.sh) which both create a .dar file that contains the whole backup. This .dar file is then being sent to the provider's backup FTP server. After everything is done, a mail is sent to root which informs about successful completion or failure, and the used/free space on the FTP as well as on the local backup partition. Configuration is done in the script, everything is explained there. The two scripts backup-root.sh and backup-storage.sh are two examples how to create the backups. I'm using these scripts for quite some time, and also did two full recoveries without any problems. backup-root.sh includes backup-root.options and backup-storage.sh includes backup-storage.options through dar's "-B" option. All options in those .options files are documented, so it should be easy for anybody to understand what the script does, and how. dar-2.7.17/doc/samples/available_space.duc0000755000175000017520000000171714041360213015305 00000000000000#!/bin/sh if [ -z "$1" -o -z "$2" -o -z "$3" -o -z "$4" -o -z "$5" -o -z "$6" -o -z "$7" ]; then echo "This script is expected to be run from dar this way:" echo "dar ... -E \"$0 %p %b %n %e %c \" ..." echo "where %p %b ... 
%c are to be used verbatim, while is to be" echo "replaced by the path of the mounted filesystem to monitor" echo "and by the minimum space required to store a full slice" exit 1 fi SLICE_PATH="$1" SLICE_BASENAME="$2" SLICE_NUMBER="$3" SLICE_EXTENSION="$4" DAR_CONTEXT="$5" MOUNT_POINT="$6" SLICE_SIZE="$7" FREE=`df $MOUNT_POINT | grep '/' | sed -re 's/.*[ ]+([0-9]+)[ ]+[0-9]+%.*/\1/'` while [ $FREE -le $SLICE_SIZE ]; do FREE=`df $MOUNT_POINT | grep '/' | sed -re 's/.*[ ]+([0-9]+)[ ]+[0-9]+%.*/\1/'` echo Free space on $MOUNT_POINT is $FREE KB echo "Waiting for disk change... Press enter when ready to continue!" read i done echo "Continuing with slice $SLICE_NUMBER" dar-2.7.17/doc/samples/index.html0000644000175000017520000001761314403564520013522 00000000000000 Dar - Scripts and Examples
    Dar Documentation

    Scripts and Examples

    On this page you can find several scripts and configuration files that have been sent by dar users. They should all work; if some do not, treat them as illustrations or examples on which to base your own configuration scripts.

    You will find here both DUC files (Dar User Commands), which can be launched from dar thanks to its -E or -F options, as well as scripts from which dar is launched:
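A DUC file is just a script that dar invokes with slice information substituted for the %-codes. As a minimal illustration (a hypothetical sketch, not one of the samples listed below; the variable names follow the convention used in available_space.duc), such a script could look like:

```shell
#!/bin/sh
# Hypothetical minimal DUC (Dar User Command) sketch -- illustration only.
# dar would run it between slices, substituting the %-codes first, e.g.:
#   dar -c /backup/archive -s 650M -E "notify.duc %p %b %n %e %c"
set -- /backup my_archive 3 dar operation   # simulated arguments for this sketch
SLICE_PATH="$1"       # %p - directory the slice was written to
SLICE_BASENAME="$2"   # %b - archive basename
SLICE_NUMBER="$3"     # %n - number of the slice just completed
SLICE_EXTENSION="$4"  # %e - slice extension ("dar")
DAR_CONTEXT="$5"      # %c - execution context
MSG="done: ${SLICE_PATH}/${SLICE_BASENAME}.${SLICE_NUMBER}.${SLICE_EXTENSION} (context: ${DAR_CONTEXT})"
echo "$MSG"
```

Remove the `set --` line in real use; dar supplies the arguments itself when it runs the command given to -E.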
    Description Author Type Download
    Script that uses dar to make full or differential backups to CD-R stef at hardco.de Script cdbackup.sh
    Sample /etc/darrc or ~/.darrc file "(me)" DCF darrc_sample
    For those who like to learn with examples (a rich one) ;-) Henrik Ingo DCF sample1.txt
    Script to create PAR redundancy data for protection against media corruption Denis Corbin DUC dar_par_create.duc
    Script to test and repair slice with redundancy data Denis Corbin DUC dar_par_test.duc
    Dar config files for dar_par_create and dar_par_test.duc DCF dar_par.dcf
    Let dar pause every N slices instead of after every slice when creating an archive [this is now obsolete, as the -p option can now receive an argument telling after how many slices to pause] Denis Corbin DCF pause_every_n_slice.duc
    Automatic full/differential backup script, with automatic mounting/unmounting; see the comments inside automatic_backup.txt for more info, see also this documentation file (same author). Manuel Iglesias Script automatic_backup
    Perl script wrapping: dar+parchive+growisofs Matthew Caron Script dar_backup
    Bash script for Linux users giving a rough estimate of the amount of virtual memory dar requires to save the whole system. Bob Barry Script dar_rqck.bash
    To save your home directory without worry (skip the trash directory, make a full or differential backup); everything is explained by the author in this tiny document. Jakub Holy Script DCF
    DCF
    JH-dar-make_user_backup.sh
    JH_darrc
    JH_dar_archiver.options
    Local or remote backup script (using scp) to be launched from cron, automatically deciding whether the backup has to be full or incremental Roi Rodriguez Mendez & Mauro Silvosa Rivera (Cluster Digital S.L.) Script cluster_digital_backups.sh
    Shell script to backup to an FTP server Patrick Nagel Script

    Note.txt
    ftpbackup.sh
    backup-root.sh
    backup-root.options
    backup-storage.sh
    backup-storage.options

    Enhanced version of the script by Roi and Mauro (see cluster_digital_backups.sh above) Jason Lewis Script dar_backups.sh

    A very complete script that:

    • can perform Logging
    • uses configuration files (see attached sample including usage comments)
    • can use Snapshots (if fs_root is on an LVM volume)
    • can format DVDs
    • can write a dar archive to DVD.

    The design requires that each backup job fits on a single DVD, optionally writing a directory and contents to DVD. This allows:

    • copying system documentation to DVD for reference during system recovery
    • Writing dar_static to DVD for potential use during system recovery
    • Writing /etc/lvm and contents to DVD for potential use during system recovery
    • Options to restart failed DVD operations by skipping to DVD writing and to DVD verification
    • Extensive error trapping

    The script itself contains very detailed user information.

    Charles Script MyBackup.sh.tar.gz
    A shell script to replace the -p option when one needs to pause before dar lacks space to add a new slice on the disk. This may be of some use when using media of different sizes to store a given archive. You then need to choose the slice size (-s option) as the greatest common divisor of all the media sizes to let dar handle this situation quite nicely. Denis Corbin DUC available_space.duc

    Auxiliary script to use when calling dar to back up only files that have changed in the last N days, where N is an integer given as argument to this script.

    usage: dar -c backup -af -A `./date_past_N_days 3` <other options to dar...>
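    The helper only has to print a date N days in the past for the -af -A pair to consume. A minimal sketch, assuming GNU date (the real date_past_N_days script shipped with dar may differ):

```shell
#!/bin/sh
# Hypothetical sketch of a date_past_N_days-style helper (GNU date assumed).
# Prints the date N days ago in ISO format, for use as: dar -c backup -af -A <date>
N="${1:-1}"                               # number of days, defaults to 1
PAST=$(date -d "$N days ago" +%Y-%m-%d)   # e.g. 2005-04-22
echo "$PAST"
```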

    Denis Corbin Script date_past_N_days
    dar-2.7.17/doc/samples/darrc_sample0000644000175000017520000000150014403564520014070 00000000000000###### # this is an example of what could be a batch file # (given to -B option), a /etc/darrc and a $HOME/.darrc file # reminds that it is a simple example... # all: # make terminal bell when user action is requested -b create: # a list of file to not try to compress -X "*_all_*.*.dar" -X "*_diff_*.*.dar" -X "*_inc_*.*.dar" -Z "*.mpg" -Z "*.MPG" -Z "*.jpg" -Z "*.JPG" -Z "*.gz" -Z "*.tgz" -Z "*.bz2" -Z "*.zst" -Z "*.tbz" -Z "*.mp3" -Z "*.mpeg" -Z "*.zip" -Z "*.dar" # create empty dir for excluded directories -D -R / # we don't save these directories -P tmp -P var/tmp -P mnt -P proc -P dev/pts # here we say we don't want to save dar files -X "*.*.dar" # we pause before starting a new slices -p # and we use gzip compression -z default: # if no action is given then show the version # in place of the usage help -V dar-2.7.17/doc/samples/cluster_digital_backups.sh0000644000175000017520000000577414041360213016743 00000000000000#!/bin/bash # Script Name: dar_backups.sh # Author: Roi Rodriguez Mendez & Mauro Silvosa Rivera (Cluster Digital S.L.) # Description: dar_backups.sh is a script to be runned from cron which # backups data and stores it locally and optionally remote using scp. # It decides between doing a master or an incremental backup based # on the existance or not of a master one for the actual month. The # remote copy feature needs a ssh authentication method which # doesn't prompt for a password, in order to make it non-interactive # (useful for cron, if you plan to run it by hand, this is not # necessary). 
# Version: 1.0 # Revision History: # 22.08.2005 - Creation # Base directory where backups are stored BASE_BAK_DIR=/var/BACKUP/data # Directory where backups for the actual month are stored (path relative to # $BASE_BAK_DIR) MONTHLY_BAK_DIR=`date -I | awk -F "-" '{ print $1"-"$2 }'` # Date-checking variable CURRENT_MONTH=$MONTHLY_BAK_DIR # Name and path for the backup file. SLICE_NAME=${BASE_BAK_DIR}/${MONTHLY_BAK_DIR}/backup_`date -I` # Max backup file size SLICE_SIZE=100M # Remote backup settings REMOTE_BAK="true" REMOTE_HOST="example.com" REMOTE_USR="bakusr" REMOTE_BASE_DIR="/var/BACKUP/example.com/data" REMOTE_MONTHLY_DIR=$MONTHLY_BAK_DIR REMOTE_DIR=${REMOTE_BASE_DIR}/${REMOTE_MONTHLY_DIR} ## FUNCTIONS' DEFINITION # Function which creates a master backup. It gets "true" as a parameter # if the monthly directory has to be created. function master_bak () { if [ "$1" == "true" ] then mkdir -p ${BASE_BAK_DIR}/${MONTHLY_BAK_DIR} fi /usr/bin/dar -m 256 -s $SLICE_SIZE -y -R / \ -g ./DATA -g ./home -g ./root -c ${SLICE_NAME}_master > /dev/null if [ "$REMOTE_BAK" == "true" ] then /usr/bin/ssh ${REMOTE_USR}@${REMOTE_HOST} "if [ ! -d ${REMOTE_DIR} ]; then mkdir -p $REMOTE_DIR; fi" for i in `ls ${SLICE_NAME}_master*.dar` do /usr/bin/scp -C -p $i ${REMOTE_USR}@${REMOTE_HOST}:${REMOTE_DIR}/`basename $i` > /dev/null done fi } # Makes the incremental backups function diff_bak () { MASTER=$1 /usr/bin/dar -m 256 -s $SLICE_SIZE -y -R / \ -g ./DATA -g ./home -g ./root -c ${SLICE_NAME}_diff \ -A $MASTER > /dev/null if [ "$REMOTE_BAK" == "true" ] then for i in `ls ${SLICE_NAME}_diff*.dar` do /usr/bin/scp -C -p $i ${REMOTE_USR}@${REMOTE_HOST}:${REMOTE_DIR}/`basename $i` > /dev/null done fi } ## MAIN FLOW # Set appropriate umask value umask 027 # Check for existing monthly backups directory if [ !
-d ${BASE_BAK_DIR}/${MONTHLY_BAK_DIR} ] then # If not, tell master_bak() to mkdir it master_bak "true" else # Else: # MASTER not void if a master backup exists MASTER=`ls ${BASE_BAK_DIR}/${MONTHLY_BAK_DIR}/*_master*.dar | tail -n 1 | awk -F "." '{ print $1 }'` # Check if a master backup already exists. if [ "${MASTER}" != "" ] then # If it exists, it's needed to make a differential one diff_bak $MASTER else # Else, do the master backup master_bak "false" fi fi dar-2.7.17/doc/samples/automatic_backup0000644000175000017520000007711414403564520014765 00000000000000#Written by Manuel Iglesias. glesialo@tiscali.es #Notes: SystemDirectory=/sbin # This file should be copied (by CopySystemFiles) to its corresponding Directory (see above). # Exit codes at the end of this file. CommandName=`basename $0` ######################################################### # BACKUP SETUP. BEGIN. Read Dar Doc before modification. ######################################################### # Permissions. ################## # Allow use only in run level 1. CheckRunLevel=false # # Allow use only by root (Super user). CheckUser=true # ######################################################### # Paths and files. ################## # Directories. ######### # Backup files Directory: Absolute path (Should start with '/'!!). Don't end it with '/' unless it is '/'. DestinationDir=/store/.Store/Backup # # Origin of Backup/Restore Directory: Absolute path (Should start with '/'!!). # Don't end it with '/' unless it is '/'. OriginDir=/ # # Directories to backup. Relative to Origin of Backup Dir! Empty means: all dirs # (Except those in Directories to ignore. See below.). Separate with spaces. SubDirsToBackup="root home" # # Directories to ignore. Relative to Origin of Backup Dir! Separate with spaces. 
SubDirsToIgnore="home/manolo2 home/manolo/documents/Secret */.Trash* .Trash*\ */.mozilla/*/[Cc]ache */.opera/[Cc]ache* */.pan/*/[Cc]ache */.thumbnails" # # DestinationDir will be automatically included in SubDirsToIgnore if DestinationDir is a subdirectory # of OriginDir. If you want to include the base (IE.: Temp if DestinationDir: OriginDir/Temp/Backup) of # DestinationDir instead, set constant IgnoreBaseOfDestinationDir to true. Value (true | false). IgnoreBaseOfDestinationDir=true # # File systems that should be mounted for a correct backup. If any of them has to be mounted, # it will be umounted before this shellscript exits. Please mind mounting order!! # Absolute path (Should start with '/'!!). Separate with spaces. DirsToMount="/home /home/common /store" # ################## # Files. ######### # Files to backup. Empty: all files (Except those in Files to ignore. See below.). # No Path. Separate with spaces. FilesToBackup="" # # Files that should not be included in backup. No Path. Separate with spaces. FilesToIgnore="*~ .*~ cryptfile0.crypt cryptfile1.crypt" # # Files that should not to be compressed. No Path. Separate with spaces. FilesNotToCompress="*.dar *.crypt *.arj *.bz2 *.bz *.zst *.Z *.tgz *.taz *.cpio *.deb\ *.gtar *.gz *.lzh *.lhz *.rar *.rpm *.shar *.sv4cpi *.sv4crc *.tar *.ustar *.zoo\ *.zip *.jar *.jpg *.gif *.mpg *.mpeg *.avi *.ram *.rm" # ######################################################### # Parameters used to choose Differential Backup level. ################## BlockSize=1024 # # When Diffbackup > (MaxDiffPercentOfFullBackup% of FullBackup): New FullBackup recommended. MaxDiffPercentOfFullBackup=30 # # When Diffbackup < (MinDiffPercentOfFullBackup% of FullBackup): Rewrite first DiffBackup recommended. MinDiffPercentOfFullBackup=3 # # Max 99. If (Nr of DiffBackups) > MaxNrOfDiffBackups: Rewrite first DiffBackup recommended. MaxNrOfDiffBackups=20 # ######################################################### # Dar settings and options. 
################## #Used dar suite program names. DarManagerName=dar_manager DarName=dar # # Directory where dar usually resides. Absolute path (Should start with '/'!!). Don't end it with '/'. DarDir=/usr/local/bin # # Create empty sub-directories in backup instead of those not saved. Value (true | false). BackupIgnoredDirsEmpty=true # # CompressWithBZip2=false -> no compression. Value (true | false). CompressWithBZip2=true # # Compress Files > 100Mb. Only valid if CompressWithBZip2=true. Value (true | false). CompressBigFiles=true # # Value (true | false). VerboseMode=false # # Value (true | false). MakeSlices=true # # StopAfterSlices: Only valid if MakeSlices=true. Value (true | false). StopAfterSlices=false # # SizeOfDarStatic: dar_static + DocFiles + Restore shell + etc (To calculate first slize size). SizeOfDarStatic=4 # SliceSize=650 # ######################################################### # BACKUP SETUP. END. Read Dar Doc before modification. ######################################################### ######################################################### # SUBROUTINES. BEGIN. ######################################################### echoE() { # echo to standard error. Remove leading/trailing blanks and double spaces. echo $* 1>&2 return 0 } Usage() { echoE "$CommandName creates (Using '$DarName'), in directory" echoE "'$DestinationDir'," echoE "a backup of all files and directories in" echoE "'$OriginDir'." echoE "It analyzes current backup files and recommends the most suitable new" echoE "backup level to the user. It also creates/updates a database with backup" echoE "information for future Backup management (Using '$DarManagerName')." echoE echoE "The backup will be split in files of $SliceSize Mb to fit in removable media." echoE echoE "Usage: $CommandName. (User can choose backup level)." echoE "or" echoE "Usage: $CommandName -auto. ($CommandName selects backup level automatically)." 
echoE return 0 } UmountDirs () { if [ "$DirsToUMount" != "" ] then echo "############" echo "$CommandName: Unmounting file systems:" for i in $DirsToUMount do mount | grep -w $i &> /dev/null if [ $? -eq 0 ] then if (umount $i &> /dev/null) then echo "$CommandName: $i unmounted." else echoE "$CommandName: $i could not be unmounted." fi else echo "$CommandName: $i was already unmounted." fi done fi echo "############" return 0 } TwoDigits () { #Add leftmost 0 if [ $1 -lt 10 ] then echo 0$1 else echo $1 fi return 0 } Stream() { # Output String(s) without letting the Shell interpret metacharacters. # Remove leading/trailing blanks and double spaces. # Enclose arguments in "" when calling. I.E.: Stream "$Var1 $Var2" TempStr=$@ Length=${#TempStr} if [ $Length -eq 0 ] then return else CharNum=0 while [ $CharNum -lt $Length ] do echo -n "${TempStr:$CharNum:1}" let CharNum++ done echo fi return } ######################################################### # SUBROUTINES. END. ######################################################### NoUserChoice=false if [ $# -ne 0 ] then if [ "$1" == "-auto" ] then NoUserChoice=true else Usage exit 1 fi fi if $CheckRunLevel then RunLevel=`runlevel | sed 's/.* //'` if [ $RunLevel != S ] then echoE "$CommandName: RunLevel: $RunLevel. Please change to RunLevel 1 (init 1) and try again." exit 1 fi fi if $CheckUser then CurrentUser=`whoami` if [ "$CurrentUser" != "root" ] then echoE "$CommandName: User: '$CurrentUser'. Please login as 'root' and try again." exit 1 fi fi echo "############" DirsToUMount="" if [ "$DirsToMount" != "" ] then echo "$CommandName: Mounting file systems:" for i in $DirsToMount do mount | grep -w $i &> /dev/null if [ $? -ne 0 ] then if (mount $i &> /dev/null) then echo "$CommandName: $i mounted." DirsToUMount=" $i"$DirsToUMount else echoE "$CommandName: $i could not be mounted. Aborting." UmountDirs exit 2 fi else echo "$CommandName: $i was already mounted." 
fi done echo "############" fi if [ "$OriginDir" != "/" ] then # if first character is not '/'. if [ "${OriginDir:0:1}" != "/" ] then echoE "$CommandName: 'Origin' directory:" echoE "$CommandName: $OriginDir." echoE "$CommandName: Must be an absolute path (Should start with '/'!)." echoE "$CommandName: Please edit '$CommandName' and try again." UmountDirs exit 3 else # if last character is '/'. if [ "${OriginDir:${#OriginDir}-1:1}" == "/" ] then echoE "$CommandName: 'Origin' directory:" echoE "$CommandName: $OriginDir." echoE "$CommandName: Should not end with '/'!." echoE "$CommandName: Please edit '$CommandName' and try again." UmountDirs exit 3 else if test ! -d $OriginDir then echoE "$CommandName: 'Origin' directory:" echoE "$CommandName: $OriginDir." echoE "$CommandName: Does not exist. Please edit '$CommandName' and try again." UmountDirs exit 3 fi fi fi fi if [ "$DestinationDir" != "/" ] then # if first character is not '/'. if [ "${DestinationDir:0:1}" != "/" ] then echoE "$CommandName: 'DestinationDir' directory:" echoE "$CommandName: $DestinationDir." echoE "$CommandName: Must be an absolute path (Should start with '/'!)." echoE "$CommandName: Please edit '$CommandName' and try again." UmountDirs exit 3 else # if last character is '/'. if [ "${DestinationDir:${#DestinationDir}-1:1}" == "/" ] then echoE "$CommandName: 'DestinationDir' directory:" echoE "$CommandName: $DestinationDir." echoE "$CommandName: Should not end with '/'!." echoE "$CommandName: Please edit '$CommandName' and try again." UmountDirs exit 3 else if test ! -d $DestinationDir then echoE "$CommandName: 'DestinationDir' directory:" echoE "$CommandName: $DestinationDir." echoE "$CommandName: Does not exist. Please edit '$CommandName' and try again." UmountDirs exit 3 fi fi fi fi if [ $OriginDir == $DestinationDir ] then echoE "$CommandName: 'DestinationDir' and 'OriginDir' can not be the same directory!" echoE "$CommandName: Please edit '$CommandName' and try again." 
UmountDirs exit 3 fi # Find dar & dar_manager if type >/dev/null 2>&1 $DarName then DarFound=true else DarFound=false fi if type >/dev/null 2>&1 $DarManagerName then DarManagerFound=true else DarManagerFound=false fi if ! ($DarFound && $DarManagerFound) then if [ "$DarDir" != "/" ] then # if first character is not '/'. if [ "${DarDir:0:1}" != "/" ] then echoE "$CommandName: 'DarDir' directory:" echoE "$CommandName: $DarDir." echoE "$CommandName: Must be an absolute path (Should start with '/'!)." echoE "$CommandName: Please edit '$CommandName' and try again." UmountDirs exit 3 else # if last character is '/'. if [ "${DarDir:${#DarDir}-1:1}" == "/" ] then echoE "$CommandName: 'DarDir' directory:" echoE "$CommandName: $DarDir." echoE "$CommandName: Should not end with '/'!." echoE "$CommandName: Please edit '$CommandName' and try again." UmountDirs exit 3 else if test ! -d $DarDir then echoE "$CommandName: 'DarDir' directory:" echoE "$CommandName: $DarDir." echoE "$CommandName: Does not exist. Please edit '$CommandName' and try again." UmountDirs exit 3 fi fi fi fi # Include directory, where dar usually resides, in PATH." # DarDir not in PATH? echo $PATH | grep $DarDir &> /dev/null if [ $? -ne 0 ] then PATH=$DarDir":"$PATH fi fi if ! type >/dev/null 2>&1 $DarName then echoE "$CommandName: $DarName neither in PATH nor in $DarDir. Aborting." UmountDirs exit 3 fi if ! type >/dev/null 2>&1 $DarManagerName then echoE "$CommandName: $DarManagerName neither in PATH nor in $DarDir. Aborting." UmountDirs exit 3 fi ######################################################### # VARIABLES INITIALIZATION. BEGIN. ######################################################### # Backup Paths. ############### #Backup base names & DataBase name. 
FullBackupBaseName=$CommandName"Full" DiffBackupBaseName=$CommandName"Diff" DataBaseName=$CommandName"DataBase" # FullBackupPath=$DestinationDir/$FullBackupBaseName DiffBackupPath=$DestinationDir/$DiffBackupBaseName DataBasePath=$DestinationDir/$DataBaseName # ######################################################### # Set dar options. ############### # Backup base name (Will be set later): -c PathBackUpBaseName BackupNameOption="-c " # # Reference backup (Will be set later) for differential backups: -A PathBackUpBaseName ReferenceBackupOption="-A " # # Origin of Backup: -R /. DarOptions="-R "$OriginDir # # Compress data inside the backup using bzip2: -y[CompressLevel]. # CompressLevel: 0 minimum; 9 maximun. Compress Files > 100Mb: -m 0. if $CompressWithBZip2 then DarOptions=$DarOptions" -y9" if $CompressBigFiles then DarOptions=$DarOptions" -m 0" fi fi # # Verbose mode: -v if $VerboseMode then DarOptions=$DarOptions" -v" fi # # Create empty sub-directories in backup instead of those not saved: -D if $BackupIgnoredDirsEmpty then DarOptions=$DarOptions" -D" fi # # Do not read ~/.darrc nor /etc/darrc configuration file: -N DarOptions=$DarOptions" -N" # ######################################################### #Set Slice options. ############### if [ $SliceSize -gt $SizeOfDarStatic ] then let FirstSliceSize=$SliceSize-$SizeOfDarStatic else FirstSliceSize=$SliceSize fi # # All sizes in Mb; Stop after each slize. if $MakeSlices then FirstSliceSizeOption="-S "$FirstSliceSize"M" SliceSizeOption="-s "$SliceSize"M" # Pause between slices to change removable media. Ring bell: -p -b if $StopAfterSlices then DarOptions=$DarOptions" -p -b" fi else FirstSliceSizeOption="" SliceSizeOption="" fi # ######################################################### #Set Include/Exclude Files Options. 
############### # Files you don't want to backup: -X "*~" -X ".*~" if [ "$FilesToIgnore" != "" ] then InclExclFilesOption='-X "'`Stream "$FilesToIgnore" | sed 's/ /" -X "/g'`'"' else InclExclFilesOption="" fi # # Files you want to backup without compression: -Z "*.zip" if $CompressWithBZip2 then if [ "$FilesNotToCompress" != "" ] then InclExclFilesOption=$InclExclFilesOption' -Z "'`Stream "$FilesNotToCompress" | sed 's/ /" -Z "/g'`'"' fi fi # # Files to include in backup: -I "*.html". if [ "$FilesToBackup" != "" ] then InclExclFilesOption=' -I "'`Stream "$FilesToBackup" | sed 's/ /" -I "/g'`'" '$InclExclFilesOption fi # ######################################################### #Set Include/Exclude directories Options. ############### # $OriginDir in $DestinationDir? echo $DestinationDir | grep $OriginDir &> /dev/null if [ $? -eq 0 ] then # TempDir= $DestinationDir-$OriginDir TempDir=`echo $DestinationDir | sed s%$OriginDir%%` if $IgnoreBaseOfDestinationDir then # Include BaseDir of DestinationDir (Without first '/') in SubDirsToIgnore. # if first character, in TempDir, is not '/'. if [ "${DestinationDir:0:1}" != "/" ] then # Add '/' in front. TempDir="/"$TempDir fi TempPath=$TempDir while [ $TempPath != `dirname $TempPath` ] do BasePath=$TempPath TempPath=`dirname $TempPath` done BasePath=`basename $BasePath` if [ "$SubDirsToIgnore" != "" ] then SubDirsToIgnore=$SubDirsToIgnore" $BasePath" else SubDirsToIgnore=$BasePath fi else # Include DestinationDir (Without first '/') in SubDirsToIgnore. # if first character, in TempDir, is '/'. if [ "${TempDir:0:1}" == "/" ] then # Remove first '/'. TempDir=${TempDir:1:${#TempDir}-1} fi if [ "$SubDirsToIgnore" != "" ] then SubDirsToIgnore=$SubDirsToIgnore" $TempDir" else SubDirsToIgnore=$TempDir fi fi fi # # Sub-trees you must not save: -P dev/pts -P proc. Path must be relative to -R option # Enclose each directory in "" just in case there are metacharacters in the name. 
if [ "$SubDirsToIgnore" != "" ] then IncludeExclDirsOption='-P "'`Stream "$SubDirsToIgnore" | sed 's/ /" -P "/g'`'"' else IncludeExclDirsOption="" fi # # Sub-trees you must save: Add without any option in front. # Enclose each directory in "" just in case there are metacharacters in the name. if [ "$SubDirsToBackup" != "" ] then IncludeExclDirsOption='-g"'`Stream "$SubDirsToBackup" | sed 's/ /" -g "/g'`'" '$IncludeExclDirsOption fi # ######################################################### # Set dar_manager options. ############### # Create DataBase: -C PathBaseName CreateDataBaseOption="-C "$DataBasePath # # DataBase used as reference: -B PathBaseName DataBaseNameOption="-B "$DataBasePath # # Add Archive to DataBase (Will be set later): -A PathArchiveName AddToDataBaseOption="-A " # ######################################################### # VARIABLES INITIALIZATION. END. ######################################################### FullDiffBackupSize=`ls -1 -s --block-size=$BlockSize $FullBackupPath.* 2> /dev/null | awk '{s = s + $1} END {print s}'` if [ "$FullDiffBackupSize" == "" ] then FullDiffBackupSize=0 fi TotalDiffBackupSize=`ls -1 -s --block-size=$BlockSize $DiffBackupPath??.* 2> /dev/null | awk '{s = s + $1} END {print s}'` if [ "$TotalDiffBackupSize" == "" ] then TotalDiffBackupSize=0 fi echo "$CommandName: ### `date --rfc-822` ###" echo "$CommandName: Current backup information (Size in $BlockSize bytes blocks.):" if [ $FullDiffBackupSize -eq 0 ] then echo "$CommandName: No $FullBackupBaseName files found!" echo "############" echo "$CommandName: Preparing to Create $FullBackupBaseName." DiffBackupNr=0 LastDiffBackup=$DiffBackupNr else echo "$CommandName: ..$FullBackupBaseName: $FullDiffBackupSize." if [ $TotalDiffBackupSize -eq 0 ] then DiffBackupNr=1 LastDiffBackup=0 BaseName=$DiffBackupBaseName`TwoDigits $DiffBackupNr` echo "############" echo "$CommandName: Preparing to Create $BaseName." 
else echo "$CommandName: ..$DiffBackupBaseName: $TotalDiffBackupSize:" DiffBackupNr=0 LastDiffBackup=$DiffBackupNr BestChoiceDiffLevel="" RemainingDiffSize=$TotalDiffBackupSize CurrentSize=1 while [ $CurrentSize -ne 0 ] do let DiffBackupNr++ BaseName=$DiffBackupPath`TwoDigits $DiffBackupNr` CurrentSize=`ls -1 -s --block-size=$BlockSize $BaseName.* 2> /dev/null | awk '{s = s + $1} END {print s}'` if [ "$CurrentSize" == "" ] then CurrentSize=0 fi if [ $CurrentSize -ne 0 ] then LastDiffBackup=$DiffBackupNr let RemainingDiffSize=$RemainingDiffSize-$CurrentSize if [ "$BestChoiceDiffLevel" == "" ] && [ $CurrentSize -lt $RemainingDiffSize ] then BestChoiceDiffLevel=$DiffBackupNr fi BaseName=$DiffBackupBaseName`TwoDigits $DiffBackupNr` echo "$CommandName: ....$BaseName: $CurrentSize." fi done echo "############" let NextDiffBackup=$LastDiffBackup+1 if [ "$BestChoiceDiffLevel" == "" ] then BestChoiceDiffLevel=$NextDiffBackup fi Choice[4]="Exit $CommandName." let MinDiffBackupSize=$FullDiffBackupSize*$MinDiffPercentOfFullBackup/100 if [ $TotalDiffBackupSize -lt $MinDiffBackupSize ] then BestChoiceDiffLevel=1 Choice[1]=" ($DiffBackupBaseName<$MinDiffPercentOfFullBackup%$FullBackupBaseName)." fi if [ $LastDiffBackup -gt $MaxNrOfDiffBackups ] then BestChoiceDiffLevel=1 Choice[1]=${Choice[1]}" (NrOfDiffBackups>$MaxNrOfDiffBackups)." fi BaseName=$DiffBackupBaseName`TwoDigits $BestChoiceDiffLevel` Choice[1]=" $BaseName."${Choice[1]} BaseName=$DiffBackupBaseName`TwoDigits $NextDiffBackup` Choice[2]="Create $BaseName. Faster." Choice[3]="Rewrite $FullBackupBaseName ($DiffBackupBaseName>$MaxDiffPercentOfFullBackup%$FullBackupBaseName). Recommended!" let MaxDiffBackupSize=$FullDiffBackupSize*$MaxDiffPercentOfFullBackup/100 if [ $NextDiffBackup -eq $BestChoiceDiffLevel ] then if [ $TotalDiffBackupSize -gt $MaxDiffBackupSize ] then Choices="1 3" CreateRewriteMode="Create" Choice[1]=${Choice[1]}" Faster." 
else Choices="" fi else CreateRewriteMode="Rewrite" if [ $TotalDiffBackupSize -gt $MaxDiffBackupSize ] then Choices="1 2 3" else Choices="1 2" Choice[1]=${Choice[1]}" Recommended!" fi fi Choice[1]=$CreateRewriteMode${Choice[1]} if [ "$Choices" == "" ] then DiffBackupNr=$BestChoiceDiffLevel BaseName=$DiffBackupBaseName`TwoDigits $DiffBackupNr` echo "$CommandName: Preparing to Create $BaseName." else Choices=$Choices" 4" echo "$CommandName: Options:" ChoiceNr=1 for i in $Choices do echo "$CommandName: $ChoiceNr.${Choice[$i]}" let ChoiceNr++ done echo "############" if $NoUserChoice then echo $Choices | grep "3" &> /dev/null if [ $? -eq 0 ] then Choice=3 else Choice=1 fi else let ChoiceNr-- ValidNumber=false until $ValidNumber do read -p "$CommandName: Please choose a number: " UserChoice case $UserChoice in [a-zA-Z-_.,]* | *[a-zA-Z-_.,] | *[a-zA-Z-_.,]*) echoE "$CommandName: No alpha characters allowed. Please try again.";; "") ;; *) ValidNumber=true;; esac if $ValidNumber then if [ $UserChoice -lt 1 ] || [ $UserChoice -gt $ChoiceNr ] then echoE "$CommandName: Allowed number range: 1..$ChoiceNr. Please try again." ValidNumber=false fi fi done ChoiceNr=0 for i in $Choices do let ChoiceNr++ if [ $ChoiceNr -eq $UserChoice ] then Choice=$i fi done echo "############" fi case $Choice in 1) DiffBackupNr=$BestChoiceDiffLevel BaseName=$DiffBackupBaseName`TwoDigits $DiffBackupNr` echo "$CommandName: Preparing to $CreateRewriteMode $BaseName.";; 2) DiffBackupNr=$NextDiffBackup BaseName=$DiffBackupBaseName`TwoDigits $DiffBackupNr` echo "$CommandName: Preparing to Create $BaseName.";; 3) echo "$CommandName: Preparing to Rewrite $FullBackupBaseName." DiffBackupNr=0;; 4) echoE "$CommandName: Program exits at user request." UmountDirs exit 4;; *) echoE "$CommandName:Warning: Incorrect choice: $Choice. Aborting." 
                UmountDirs
                exit 5;;
        esac
    fi
fi
fi
if [ $DiffBackupNr -eq 0 ]
then
    FullBackup=true
else
    FullBackup=false
fi
if $FullBackup
then
    BackupNameOption=$BackupNameOption$FullBackupPath
    if [ $FullDiffBackupSize -ne 0 ]
    then
        echo "############"
        echo "$CommandName: Removing previous $FullBackupBaseName files."
        for i in $FullBackupPath.*
        do
            if test -f $i
            then
                if (rm $i)
                then
                    echo "$CommandName: $i removed."
                else
                    echoE "$CommandName:Warning: Failure to remove $i."
                fi
            fi
        done
    fi
    if [ $TotalDiffBackupSize -ne 0 ]
    then
        echo "############"
        echo "$CommandName: Removing previous $DiffBackupBaseName files."
        for i in $DiffBackupPath??.*
        do
            if test -f $i
            then
                if (rm $i)
                then
                    echo "$CommandName: $i removed."
                else
                    echoE "$CommandName:Warning: Failure to remove $i."
                fi
            fi
        done
    fi
    if test -f $DataBasePath
    then
        echo "############"
        echo "$CommandName: Removing previous $DataBaseName file."
        if (rm $DataBasePath)
        then
            echo "$CommandName: $DataBasePath removed."
        else
            echoE "$CommandName:Warning: Failure to remove $DataBasePath."
        fi
    fi
    echo "############"
    echo "$CommandName: creating $FullBackupBaseName. Please wait."
    echo "###"
    sh <&1 | awk 'END {print $1}'`
    case $LastArchiveInDB in
        [a-zA-Z-_.,]* | *[a-zA-Z-_.,] | *[a-zA-Z-_.,]*)
            # If DataBase empty last line produced by 'dar_manager -l' is full of '--'
            echoE "$CommandName: Warning: $DataBaseName is empty. Aborting."
            UmountDirs
            exit 6;;
        *)
            if [ $LastArchiveInDB -gt $DataBaseLastValidArchive ]
            then
                echo "############"
                echo "$CommandName: Erasing previous Differential backups from $DataBaseName."
                while [ $LastArchiveInDB -gt $DataBaseLastValidArchive ]
                do
                    let ArchiveBaseName=$LastArchiveInDB-1
                    BaseName=`TwoDigits $ArchiveBaseName`
                    ArchiveBaseName=$DiffBackupBaseName$BaseName
                    if ($DarManagerName $DataBaseNameOption -D $LastArchiveInDB)
                    then
                        echo "$CommandName: Archive $LastArchiveInDB ($ArchiveBaseName) erased from $DataBaseName."
                    else
                        echoE "$CommandName: Warning: Erasing of Archive $LastArchiveInDB ($ArchiveBaseName)\
 from $DataBaseName failed."
                    fi
                    let LastArchiveInDB--
                done
            fi;;
    esac
else
    echoE "$CommandName: Warning! $DataBaseName does not exist. Aborting."
    UmountDirs
    exit 7
fi
echo "############"
echo "$CommandName: creating $BackupName. Please wait."
echo "###"
sh <

it must be included with the option -B /etc/darrc to dar ###

###
### Files that shall not be compressed (because they are already compressed)
###

# archives (Note: .tar is an archive, but not compressed => do compress it).
-Z "*.bz2"
-Z "*.deb"
-Z "*.gz"
-Z '*.zst'
-Z "*.Z"
-Z "*.zip"
-Z "*.rar"
-Z "*.tbz2"
-Z "*.tgz"
-Z "*.jar"
-Z "*.ear"
-Z "*.war"
-Z "*.BZ2"
-Z "*.DEB"
-Z "*.GZ"
-Z '*.ZST'
-Z "*.Z"
-Z "*.ZIP"
-Z "*.RAR"
-Z "*.TBZ2"
-Z "*.TGZ"
-Z "*.JAR"
-Z "*.EAR"
-Z "*.WAR"

# media - images
-Z "*.gif"
-Z "*.jpeg"
-Z "*.jpg"
-Z "*.png"
-Z "*.GIF"
-Z "*.PNG"
-Z "*.JPEG"
-Z "*.JPG"

# media - audio
-Z "*.ogg"
-Z "*.mp3"
-Z "*.OGG"
-Z "*.MP3"

# media - video
-Z "*.avi"
-Z "*.mov"
-Z "*.mp4"
-Z "*.mpg"
-Z "*.AVI"
-Z "*.MOV"
-Z "*.MP4"
-Z "*.MPG"

# documents - compressed formats
-Z "*.pdf"
-Z "*.swf"
-Z "*.sxw"
-Z "*.PDF"
-Z "*.SWF"
-Z "*.SXW"

# strange formats, binaries and other hard-to-compress files (empirical)
-Z "*.gpg"
-Z "*.rnd"
-Z "*.scm"
-Z "*.svgz"
-Z "*.vlt"
-Z "*.zargo"
-Z "*.wings"
-Z "*.xpi"
-Z "*.chm"
-Z "*.GPG"
-Z "*.RND"
-Z "*.SCM"
-Z "*.SVGZ"
-Z "*.VLT"
-Z "*.ZARGO"
-Z "*.WINGS"
-Z "*.XPI"
-Z "*.CHM"
-Z "*.obj"
-Z "*.tif"
-Z "*.tiff"
-Z "*.OBJ"
-Z "*.TIF"
-Z "*.TIFF"

###
### Ignored files
###

-X "*~" -X "*.o" # *~ are backups, *.o are compiled unlinked files

dar-2.7.17/doc/samples/cluster_digital_readme.txt

What follows is an extract from several email exchanges with Roi Rodriguez.
Denis.

----
"[the] remote copy feature needs to use an ssh authentication method which
doesn't prompt for a password (in order to make it non-interactive, so you can
run it from cron. It's not needed if someone plans to run it by hand, of
course). I've added this comment at the beginning of the script."
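The backup script above derives each differential level's file name with a TwoDigits helper (BaseName=$DiffBackupBaseName`TwoDigits $DiffBackupNr`), whose definition falls outside this extract. A minimal sketch of what such a helper presumably does, assuming printf(1) semantics (the function body here is illustrative, not the script's actual code):

```shell
# Hypothetical stand-in for the TwoDigits helper used by the script above:
# zero-pad a backup level number so file names sort lexically (01, 02, ... 10).
TwoDigits() {
    printf "%02d" "$1"
}

for n in 1 2 10; do
    echo "Diff$(TwoDigits "$n")"
done
# prints Diff01, Diff02, Diff10
```

Two-digit padding matters because the script later globs levels with a pattern like $DiffBackupPath??.*, which matches exactly two characters per level number.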
----

dar-2.7.17/doc/samples/README

Please point your web browser to the index.html page of this directory.

dar-2.7.17/doc/samples/pause_every_n_slice.duc

#!/bin/sh

############################################################################
# WARNING: this script is now obsolete, due to the -p option's new feature
# that provides the same behavior. You can still use this script or tune it
# to your own needs if you like; this is why it is kept here as a sample.
############################################################################

# This script is to be launched from the dar command line when creating an
# archive with the -s option (slicing), in place of the -p option (pause
# after every slice done):
#
#   -E "pause_every_n_slice.duc %p %b %n %e %c N"
#
# will make dar pause when slice N is done, then again when slice 2*N,
# 3*N, etc. is done.
#

if [ "$1" = "" -a "$2" = "" -a "$3" = "" -a "$4" = "" -a "$6" = "" ]; then
    echo "usage: $0 <path> <basename> <slice number> <extension> <context> <N>"
    exit 1
fi

toto=$(( $3 % $6 ))

if [ $toto -eq 0 ] ; then
    echo "Pausing after slice $3"
    echo "Press return to continue"
    read junk
fi

dar-2.7.17/doc/samples/MyBackup.sh.tar.gz
“>Æç;V>2¦O‘tPm•€™²j?I”¶7«+ѽ?}Oæ>êÃçÏÇâ+¯«[ŸóÍ +Â*)7Ÿ£ê´\­=¯‘û_<?‹±m^¸3¾§í¬9¾zêExÌÚKz„¡9]—!ҔʗÐÜ*\7§™¨;ÍIªrÑ–zî k͸Ù}ÔYÂ%Õ XÁ*i±‰[BËœXê¨ú„H|\ú¸ê"væ»|ü?®RÂ³Š“Y1–¹ËS)JÔªÅé|*‹}Ôsa¾<ñHO ¸•)^<˜ÔRƒ>GÿÒçoU„¨jeca‚Cµ–bwÿž%0¦Jrš|e%†{ÓGøüýÈДìlï8Ê»@Qi•´λWîj‹~T yLí§èÔ&bšµ‚«Ç¶ü-—6´Þ¨" s›–]‘H‘IØŠ"S…¦íΙ²ULÄJð®(¨¨˜›¨9õ‚.y·¦SÒÎ%űXDsIÐeó âų°‹„^îèÒâ¸+I”•Žòs¯áP}ù9 êÍâ³Â.‹†vÍmrG ‡*ñá:P`é>ö5Â6ø÷ë‚góÌâ¨y;dõ,SƘ¥ „d¶ó—ÁÜÍÞòÐ4?\½ðArÁ=sêq|FLŒ¨{ÇcõàhQ{áü¹$×bA×ô݀ĜMÌŸ\ë Y,©ÄJ5–J@Ôภû:ȪͳV?“ÕBöÄë NÞäÒŠpæcÔ€YÉ¿‡‹" ÁeúrMçÈL)”-„;/ü _N*E,ÒýïCl;qkKsBÉÍ*RIþaåbãWžqCh]m¤Ì"r…¤NüC³H]ëê€|Ã’¥èÍÌò‰ˆ_2+« áa"tzt\\ /-÷ü¼ÍQrûº :Hn@óÑ9t©˜ÄÀ·¸×K gáÕ ÉÂb*ŽƒgEK·+IMôÉQö˜x+‚UþêŠ?K¿Ÿ\D’*"‡,¬Ì̪7š®e7ÌàÛù–Ê6æÍj×ör"øO}¨¶ìF/,¿ÃVyµê‹˜‰Ö̉8{/Nݨ5KÄç74BhË,G_ŠQg4všÎK&¦k0hmk3[ÝÜÝ7™™ÊN¥½ñº¼©àÙݧ+XgëÊ¡©¥^g>@pÍíŠA% U­LAÎËT·Â1ÛQ+yÎL>Ë#‚3ÚN&1!!ò€'`0àÞÃÖÄÌd0 úBÔHý¦AŒWšñkD'ˆ`'è–ù,Œ(E¨RßpâÓã€Tˆõýb-²nijºÀ&g|ù䔎~ñ40Ó»«óá…´=½†Å@Õ Xé"`õ¥­‘,MËØl À\“ãSÒjÁA÷r%&à9GþÈäQŠý÷03<› ;é¿K@§`4pÀ®©¹«–Rµ‰^-õ›Ýœöõ Ý ÷ø Þ‡yñ 3/£jíjÒxIq2J§}êË5§ª ,óÄ-©œ2D›T2—Ô(ÅáÈ3©üiÈÊ®¼bê” ºŸòï]ùêæºðîO‰¯}-e£ÀÞü`KÅË2ªAZwL1'ˆ¶çO?ƒŠOŠU‹&ß$Áu‡çÖAâgíÁ ²XT‚x:UcŸVÌêE*†ž<ÝœãÔÂ<,À L˱²O–Ä…"ñïhM0&Œ%¥•6æ«Á1ùBp0ؘìãó&SÀH¤¦Q+”퀶}TI÷c†,š…VA~̨–á&[‘¿¡äQΪàRYÁÈô‚°HX£ÀRâê xD¤5øá›¢ÁÙHïJ å~:eK”RÚ…ªiKq— <¼Õ|…dBŸ–(öœnÅö`’íÈÐfòìY"L³¨Ð(=Ü™SñÓúJ€v¾œïçjèéè¸/²éˆ¥³ˆ¢å¥`×0ù ZDåJÈ<íçdK%üøØýhÐhXf.¼™üÈj5’Ð(jƒZ-A”*|Ïsããô,˜Ñþªâð÷Äûü`ñÿ–FAÍùàÛ’é³y1Àél) ;Ipb¦I”H• )Å_Ó3Ó³I6|'צû…ÉõÃUþµM^æC‡V†!+§¯¬HØhVïRt]Æu=_ê#ïyÌšØäy7¶’¢¹ÎäXÄy2ù9›xš2ætÍ'o¿j2¡"»å¢®pGéÚ=!HfpÜMfꯖÄ5ì\NA™Á¾±E°c³^e4u0¹Í¾¥ …'ÿq,ð€¡…J’½Œc‡Ò^¶´0/R^4¯Eª¢%¶§4ÀW ÷‡0wu’ÏÔÞ‰“­¹Óæ6ß‹–x¹v(uK¯Ï×)”—¨wn‡×C*ˆÎs¦äLš†TÐѽKCÖa#à ÞÎS`ý(ëñ6äÍzÒœüÊX¶8e¸L—[š$Sª¬g$²^Î5†ÍpCXޤŒ¢Ò1‹Ši aõ«ú¿­7ðv0/«ûÓJt>«Ü`[¶PA@…¨ ²2ùIAÎI´Ì^ÈÖHêX÷²»txFyH]’¾UÆ­ü-£;:}>þ„HAusƒ™­hs4HBô]¥bZÖ²ªÛ;LÇ"H1Î>`,¥Â£Ø$I1¯®4)×`¯Ü‚¶<"E¨S ß‹+Ò±ðÅ‚2°l©L“Fîë3éP®®¤Nv²ä¶Ì—ÌW 4=Û¤0)ïw'ÓTB_ŠTÆЍô7‚?f‚B¬]0Z—”Cæ èe 9·$DßÀH‹«¦FòËe³LžâÄ”¡Â"/ƒ“¯°V„“¬ãäô=¼:…ÙayVèÞQ»Ì‰×ަ¯Ð£1N¥Vçy½v8êÑbJÙÙh8]ÈØÉò´¯ØJÎy{ëîâEè‡ÕÚò´µò‘a¶Ír70,o»­‚€ù2x§Š³iåץůœ“ IlD—Ä&±8:)µ¡ªYÝËÎØÂ,lW÷½[nªMÃÖӢqÌ 
F+¨ÚNonêR-ÕÒá‘ôr(Ž¡U2§š1Ó)]ËÐ3ë·–Èà¶q––4äGÐ*Î[‘rÒ\^¸(åHâ0xH)´‘ß—@V‡3Œ¶P®©øvÏÆý LÄ€íÙ¶žË_ TPÃ+tISå`ý+…ëñwob¤“¥UY“a`Ñ»º×b„TÔ*«ä5¹PÝ—B^#'T¬Ê<»úQ{à3\0Øì›°å”.õîѲ)–p™,R§ÖÓ+ù JyÇw^ïÿÇÛ †aX]›NÊL¡©N¡¡& €Šdóó™²Ò5Yé¯aHÏ—Œ7C%óA=Ñ6¨ì9sŸù ÿ®¼VÐ6xWÌ{ÌíƒþË^ÒÔ\ ª¨'¡—¶à³]î³Ìs‰ñb+u7Ž.½.Üœ-Z÷l©ÄWãÁRüê›5ruߘ»ü“ùÕ'¹îCTÝD ó ¬féàëâëdèt‰ztT°ó××óÜ9§‹ï‹Ý¯o‹ÄÞÖŸÑo»¡Mõ©Cçþ}ëe á«ëܳŸfEj?ž$ÀJñýNFžTS#îZùU…DñÙÆÑØl4q…Mc¼÷K2BÆžUÅx °o1=œAjtÙ¡“áY¥Äe”þI=ÊN‘•°î„ïpÕÐ\æ…ºù&ƒm6 ¸„L“„¯"©¯ì5…ž˜‡[†É¢¥ÆA©ŒN³ùelÃŽ ‰åÌ$F˜1ê‹‘‡›Ø@æ?*/ÑdþjBSàŸøƒò—éNˆñÏè4 É’.¸­â’®¶><ÒænÑ»àSú}ú[$¬ p1µÓõ0ǘ¬MéÍß`š÷÷)Úã®®ƒ®™©ïO<“ÿæÅ§‘k2ÔNŽÝ—¾ý¼lTÃIÛ1…ÙA bj1`I¤¦õØôÊfS4RñXFö^K3†ÿú%Yý˜2¤bUŸ&„!ü<.·!U±°å‘ZêDƒþ±ñ?'E 1TÏYº5£‹Rô=.O?Æ¡ÿz>lvaFÎ_­®÷œÚ3Œÿ?SÚz )gN35 ²Ðé‘ûkg$l (Ñ:Î6§duŸ5+:íŽçÅü pиšõÓS·‹ç 5æçã™@‘¹›Ü 6Ú[I¶¤Œ«^§ß¬¹¾£Ù&)œ³–4Uh¿% ÆÈþ!hp‡? L  …ƒ^ÒSbóMÒ¥ÃôS7—§“"F%NáRŠð|åo XåäHA†ÔQ\ D”BšE³b Úü’ Bž*þC¥ a;¹¦³’†á¬Ífj{Ñýè! !~ðRÂ%’HrÓ×÷K$EÌÒRÞŸdÉK%Ê!brè—|†éÚk Kv•¾º~U»(ê`¶QÈ{ùOq'é©~@xÆ÷üšÏ˜úÂJ²ˆõeö¨ ‡x°¥¦ª6‹Ëb~:þC`¡ó g¥#Š`Ý€s2ÀF½$ÅŠÎà,›D«¨MÒËRãÈôÂ"#™o÷%ã¥RÚŠá™ ¡fqÛÁDMé÷¤l Öµb€Y·s$ß*á‡o`³d¦Cê%Õ{´3ŠUÿu5@Q¶,)h«Ñfwí(™M‚Ù,MÝ)í´Eð}ü,\i¸’…gõÌè2I¨‹‰çæ<\ëâSËèÈÊØi¢”³ÀЉ@ØlyÏ•š÷Ö¶u‚5Ñî…º!SQ*–m“g#ÓNäå_‹ö‰ô ü 9*ä ~PT*dj>X}ÇéóM¡­M¡ÜªE¬dsÓ-S/š´ôtjni=ËKޢ㤙€¶^«á4‘\ekº; ßÚxµñÂvè¦Yö¨¹nwƒ‡wüìÖ+¬4g³ï¯v¡µbÛM‘0âüÏÛ»,–€To”È®˜ÐøÒ"G¢ ’ÇçúÎjÏâÉq™ ,›á­…8æhŠøjÇ×cíÝó@œ3ý÷f<(x¸«rQöìȵHýbؤ’*ÃW߸²1™„åçŽáŒãYk…þ‹=¨[ÑñŽç(£‹š× Bú×V RPÕ LS³››W5 RDFT¦XÖe åõé –ÇÕ„Mµ\µêï5VxÝ7iVz¿A8%¡TßÚíÉæ¹jÚòøp>ÁhyK­O';y³ÜNs`|¿cÓb^·LÈ´>Ïû'ûR¦½ ئÏëŽç žÝçÿSã®W˜vÄmÒ6™q6euÜ`þBAÉe :èÕÝ‹ì‹ïáj Qì³L­Î/§‘€ÎëLwázð>dNåXðÄž¡V" ¼Ç–ÚýIÇÊàÎìm©®Ï‚äw;öÈÈš@ö€mý4é‰çŽ`ÆC™ûLtê˜),Ò„‹!hÿžÙË7Ü¡'+*l©{¨då{ýZ,©ôo!ªS ÜÕ ù!Íç|T[{ÍITÕrawÅš‚Ü’cqú뎘Ù)±í B£ú…”–gF›’^ÄZHu5 A†ÄãÕ( GÍ•¡€RN–¬ö€+òi[Q­X0Lli`ѽÓS®‹K0MÇîÿÏ/z"Úr¡ŸÂæ2+qd£Œ%lV%F²X =åfõvNŸÕbátZ&Ù\5 ¤muo`Tôì÷ÆsÅw¿Q[<Ì‚¬Áñ¶ÜmºÎD©íþÈï0­Eëý½w§Ó•¨6SºÀét¶wM Ù=YÍuY<„há• ««ñ®©3¾‘«åzw²j/¨€QLzT i…Se¥D%kI{#G¬„]áÄ®1¨ªF&ìn£tZIà ëFû઴¢¯ÀA¬âªÕ—²î ijì#¤‘#JÛÎ/“’Nú™¼sÐ({×£Ct S›3[jè)õ§-s°((¥Û´nŒ3¶äB¢µ$îJKå°UíT—jŸa­åxIÏ<^æZ¾«âÐq`ÃSÃS Z‘C¿áÞ½ˆ”šHÍV`}Åä0FŽNLà2ÝD„y.iØÙÓ})ÁÁŸÈÀ57ýG¸$ y„ 
©8Àp®Žu“¯®à&aé…[€oó…¥à`{·M.‚ž`èÇ"$ãK4}EÄn)DÒ“C‡±_âçâmTªRR¸X€NA…ðłÆ¤¬æ˜™š ‹…¤kŸ²ƒØiŸˆ¬¯ØúùíM+Eâ¶Ú“^£L‰x@!¢æäiU† {¾Í7h9CNó€÷ PÀæ°ªËþd+n¥æ’ÔtÂ|pEîr–1a–í¾Š#®NÐr®š‘ì{N[°!CÍeÜŒ¦NÍ)D:šXÅRñ¿÷¢< X"ΖªuUCRù(«lp`bnÊ `d+ íÌdX)H ú‡sµÞn5ýãó{q¬éñeÿ ‹÷DÎGãtЬ×P 9VÐ[e2—>¬t4ÞlMƒ¶õ%Žªƒé¦‰YÌ4UƒÉðK†\m_}¿U<(Vçõ$³Yíä§aC°Æ·#W®9%·IütP“}jÜíÀúÛþÝÊSn¦áT§±HÈÚ–ñX X¬UaÃk§› Ó,´±]Îפ;5œ²Öãí^X¡˜·ûA|‹ýWZÎI0Û€-xúýËꀸfôÊ÷ÿ°•¸e¸ÑùÂò@QZµlë¼ðEpf“Í¢×Ìí÷ÍwçŽÛô$(æôŒXß4G©‡5¬ Æ Õ[5'Qw›pèS#.KÑ;ª—ZŸE(0ÃÉWž6(Õ”õ¡Ì¡¢®ÚÔD~þÑÇÎñ÷T£kTØóMÊsÇKCµÐ#Iª|PˆK¤äÇE¨ÛD+]„S˜ç•øë«’´Xƒ à‹rˆ#)NÇ¡/¡Ì}k¬}k–3ìyŸ.³G×@®‘\¦™gì¬!ðÜ@ˆ3´–o¨/”š£üùKµ}››~®xý™5Ŷòoq ïsN™u}·fTéöù}€xÝyÔXM’V­ QÐfht^]SÁýŒééfft“a´éÌcF‹B%L™®NÛð> òåþîúé6 J¯ yèw>MŠy„Ê"ó[,꜠,TOïg*Ã6]F>Q;‹$Î ´†±;['ôO¿25$$ _bž9‚1xU—Pñv*Ãqèõ ¢ÁWU3‚6ˆ I–°Ÿ:ñÛj°WaÞ* ’f´ú`z±ºHŒ9<«ÞÅÃMõ à=€8ߣç5@€!Í žþÒ1èùE(³’˜0A¨†ùd;f(fÌ(ש´2@tòD¼ñ·)ö¡ž¾Ýqø7âŸÍÑS˜ñ1eP@›ÖÔ©XŸ‰5u5%i¤dãÌ &»,o&M!O-K%½U*C)«O²ZžY¥YõMð©vQUeNãJ!J*ÆOZû9ŠE5«sÊuò{U8·4#Ý$çÊãÈ÷(VEUÖ0ødØå˜½`ÛèöÙéþ³]z àºjá‡,SEãïÌϵ1k\Ã)ŠÝ¥ÝGw‚#‹Ð%ÂÍ|ÊÊÓSn5ÛlS%œ[~ÿ·ÓìןP~nÍÓ1.o"nÚ^$¡‹%¼Aâ~Xl…R‘¥^ׯî±QÜ=I:Ë€¿’á*‘‚˜~Ä\å¦ý‡^cên¿.âlÃË gk¯¦šÝ;½ÀlfdˆÊj°Ìg —ʆɪ߲¾×Í/Œ‰Ê’µ‰ư‹åÈEwu£ä&ˤ‰ õ ¦¼šG/Eµ)·ÒŒt™¥)§ ºK=!ØÖÛ…=2Ôø˜å§z@œÎsœ ·Ûê,dD”1âÈà_•ËEBV”£‚NñëŠvC»:6e, Šî5dÇC¡c·{»øh”;<$iT¶Üø» Š’áœÛJ9’¯‘ÚxÚ}eéj§ºÖ¬Û\”à‹ÇëÈü¸1£eNl½¥tÌØÊ:ðå¤ì –¥®ŒLM®ð.Z- ±'RPD0_&ºT4{2<"ñÀ2JVQ<:µòtÍÄŸ›S•ˆ³ƒ$ïáa¶nµ|$ä[uCËá³8 ˜{°†ÖŸMÍrÇCó ’8C\^oÕ—â„ÁòJÓåY‹·ÖœJ–óhçmGë,§RÿMLF=D©î¥î®ãÍt* ZSc&êQU-V¯Á(Ûþ=€wv.q@WãÓ¨ê/•Å_4·{'ç!ï ó­‘¶D™Yq]›VÃàd¢*+Tu’*gZº@hakd†h™l6«/ ×ë"^£³Ï»¼ë¥~Jé^ÖŸg¤¯Ž¬h,ꦜ”W¹ÒV1w–½‚e(T¨³ºÙ£þ—k÷|èÆ•1ê`— nµ~q•¾qµ>qM½*imbK²ΪnÕÊOLtFO¾Ëç}±½8;;}ä1É>8s¾ŒžfŒÒâeמwß•ÈWT,ÄšíŠ,ÞîéhÔKLÕKŒZ赡ô1¶ëÂÉ)=Œu®NzÓYe͉cÿtÊ'¡MóŒüÒŸáÙõÙwÎË­„D~£qc¯Ã?qp£\µ`›UnËg%'©'„Ί¯ãVµ ‹È¡ƒ~‡ùõRþl€Y¼š÷«þ’"t~ÀÀ%À¤DËÅå2ýà¿§Wy—:G_W0§òõÆá]»žTxp³,c‚EOŒ>-zvÝÜ7Ò÷|¹&gÏpŸg¤‡Vk'juDX±Q¨Âse®_°Rímá½’jºYºŽÓ­áªÝ‹ÀÂunOÉ‘ç"™ñ5Íñf+îfÂ5H„åH1U‡+ž“ýÅk#òÂmuPÉ&a‰+ qÛ-GcØs캣u–°Ÿ» ä4~ˇ ™°c_3ï°3¬K\¸*üµtÓüÍLþz }—¤Žÿè”çZG±»’Pu® õí¹y¥‡t}ãñÇÞt'ü*Ó «÷*È>“Z.$kp&nÞov›Ôn¾Ðn!î¡æ46Þ_2D¸³‰wúvß„ŽO/⬵ÆÇr—é÷ÊTÙ1ëçD%†±VD“š&H ð\¥Ö ’Ë“WðÅׇô×8HŽùíiŠǼE‚è†n|?$K-oVF«h²pÔ"ÕçŽÜ 5¶ËÖÿ©SÆB{ 
M–Ÿ­bZwp|:žŸ0·Ñ«g†˜Ò— dŠ33â½Ä½czÃtpÛw4W˜:sõª|š~J|oœÃ-ÔëPìßï!{áƒÿâz0wsý~Þlë-ž4²/ÉP;¸Íê¬e[ÌêÀ÷’WÕ¨ Õ¡má*ÕƒŠâÓÏ¢CsâÚÑ')c׬ƒ6ñ~#'„W²>€3pZ8©Vª±”–SÀÉœQØúé°®ŒùðÒ1`Ä µ„Üzno©'l7´‡v;ثſAÁ¨ƒêüÚ­ÉË39uf¾eʰºÄ Í¢7ç lb6ò)k;ý뚯¦è‰|H8r6ˆ¿%WuE0®ê`¼ú{6¡€‰6NJ>jß\žÐaHMÔ’VÝù±EA^PG¨@êœÅƒ^,°Æ]û _.~$Gž•xñˆ6MÓ-×c$'$²vZ@6RO÷D\—t¸Œ´¦BI{…*ÉÖ\¬üüRx;1¦§šüKÃW0à¿kÿ]Nõ Íð2‹ô˜ØGë^yU"t¥û¡Hàëó1sË>Ü!›­÷d¡ëé~=u9¶ |˜.¥ûË¿z«ë¶%ätš¡ññtdóý­áªQ¡34u?€¼%N¼1oÍ#ð5[ o³V°EM=-׈†E¦IqÆv’P¥ÒX[çЪrjE¥IjV5ÛúfU­‹1Xg ÕÝBehÚ+ÜøB\i‚Sv¨ŠZÕ).‰ÓH6n#Û¸™e± bc^äçôÔ#úJI=ä¯bEtͱ Ñk”T{¸¤+=®T M£œ‹·¬ê+¬­.R!a¼š ¿JW<˜\u£Î—×Ììsξ2N;™b¥ÝêŒÖjÕæè“.v®LM.$$j\æ0ÐÇ÷OÏ-ökÏ \üãÑøy€l”m|Gi}úFf·Zr¡ ±¼h<¡l°íÍ€jžFVL<¸ ¦’bmmÐÆšj¡I,×i2l?·*Ž@eS­G’úŽÑ±±¾Ö^€ Ñf†j$xàèê4WðÛ»èkü¢º–ã¿Î‚kiU$€Ó7Ùžïi¾YÖø~#‡2«ý^ðm;Gi¨x\ƒ»/ÊçÜþCcƒìË3tœßûl(NEôuÌ^W[J¿_\]¡–|?scø¢ä¿õ)0„ÛŒ1Ð+§9¸ æfk¦^]j]î³Îå\X'JCt`Q9¥Ó#M\´r¡$©“Ÿ¹VÝ_~åʱÀ†=%æ&ŽY:1ÐN–‰‰fÜN¶Š9Š)NHåðTmA£|~~_ÈmÀ ô Æ]ª¾Ãð?ç òùA2>YrÆÚÖõcÚvYTõÊ,(J)/é,‰ËÒÜðƒsÒ¯õ´ÈY¾—ó]~!"F†¹ Òµf^A'*ó’Æ¿d¡éùžþÇ×.§,ÓÃpÊWà˜»Éæ×¨î’©7’æºtæpç)å)äûÑò uw­ËŽíŒÞ c6ݬ~œ£¹‹¤¥ÛmŒ@ðz:›Ñ)B’C˜®¹-¯èª ’ß²;㺫æ ·–UØr¥fw{«»ê}3]z’HË•ñãþ%/JK;ObÖ™q'ÂnÍ 6yi¬Wé0ÚÛÀ 2—mòÞý¥8pIûƒ|~,GÉŠÏ·‹¿—èBRvÎS„? 
ßn®ÿÈ… ¡eYpùEYB]s¨V©Æ¡™V^G¯gð‘z»5ú5#w‘ËØ[é¾ò¡ô€sÆuˆp‘÷Í—ãÊõú@2vëØžÍ«Pâp)œS·Ÿy·yºMP!ŒF Û&u§ˆÉŽ >“zPÓm˜Y°0ÎêCÅr'gï!χ|‹CÛ½Ô²%¢=çà1|PDZËÐGŠÿUSUƒU£UÓíXeLõSÃÈ–hl¡|š‘xåñ-7L”j²eÙ˜`MÂ5u óÈû¥€¤¶Ïá}FÁ—XDƒ"—€ ÚZû›.õlâÂÝ%¤·Ü]@àæýH¾¥oVA_pyæ–seEÛ¾kgm¸(î÷hø0j6 Î@P¢‚iꇺ/^šàåøA䓊û9ù™³~†çA·örÎwvØ’V{îl42†àMú{úVç÷U.- HÖnúfüfÚ#á“V™Ú Ãb(†vDê¢m!G.þ}ƒw`±ÏHЕ„Mh|§${ìBŒ+ûv0–]/Rnnì£=a5zî°Œ½E“Ü5B¢f²&Qí&W­º°ÍÓåsò(Ö‹ËlKl]ƒŒ£~‚áØ}h€£<‰M+Ìå¼bIddð ÿ•üꧦ½…vìű8m(‰]ÐoóÄ"­®ßÐ’ík‹‹ùúGô¶0¥*w(KÞêÍÿUïÅ73Ã4¢âx\€ãcŠ˜jË7,¤È!&”êv oñÄvy–‹þB \ Øæ×«Âšã|6/È8§„áX¾q0%€Š¯kbÙÌ­fw§—ˆ“Xûù ‡Hu¸èåÕ£K§ìªöŒ°eÑ‘TÏ:רÈÎ.DN„‰sháJ”箳¬cEðYDŽn7uë`TZÝ=‹W~§¬¦òÉÁm­aô&ɾ®Ó*/•×ëŸá ÐymÈBåm®ÏUÆÂµÖ\´é’8SÑÑ”êTW7þÁ(ÁT¢5ÄTbõˆ¨ zÓ0,<`ðúä ìÀøhª iÀ5ð™>ñÛéá1'0-»C8Aëè²K5VbÖWÏÛÑQ%§ŽŽŠÎU…S¥lÎc¡IóÖŠ•D6¦Ò‡š…TܤäR²ðc†}Õ&òB ü„Ö¸ÅÓœ&±+Š˜ …)Xwt˜É„'ü‚€_R}±OŸ%&”Tâ(£QTŠRÞ~‡ãǦš_ß-—Q•r}éq«·™¯~úÃ볨Ó] \Ž×Eª§SIe aÆHkÈe&j]žVî—§ÒÒïM\ª÷»–·Èa.K[5e2f˜V•<í9‡2ü j:Ê%sho‰Öæ­®Šê‰³¨©Å&`IÀ=ËôlÍÚKÕß§•YΟöÇ¥x©ýi,Š+WY0'I÷³Z@ZnT›D÷äÁG²„€? oa¥+!J0s|} c}£î³IG4…V.ÃÄú:6©Öì",_’’„‘GJ31wQÈBEû:QêiYím7 ÇÓÚ­úsÙY™pï•ë‡rB¸×‹ÓÎVŸºÒ”ÏSîYÞ-nî©=X ‰[¬nÓÑdÆIT«Ýº÷] \o‰Â2€ÃwSW_ûíi‹ VäÏN·ÓÇ_3ÈâǢA†kz„†CÌlÑÓΤ_„_ŽšõIêõñ]¯þWÁYX¯Þ½eÙ†âþS%ÖîEïÌÜðʤ?ˆP¥ÿ<4W`-ÿ¶93}mûNìòî¦()#S ‰ $TSÜxDÖcJˆkcçè°èœûEEš£Ð_¯Ñ W;õÛ“áÊ@«W˜<«º+¬ŸÑþçª ³YuÓawWŠÞž®‡ ¼¾(¢À}šlñ#;M¯A)±X¤hJ17œiCÏc¹È½VîROí/ö9›¡¡ÊœwV6-›p¢UT5ËìÖÆ`u´uhªFÈÖ±—ôeHÞ…±÷é–nÑyzRþ!sÌnF$©ÐuD_©G"‹ê“7³Éú°lf±…¢]í@îu@(å(ÿ4… \ô5–[÷­IÒngþ‰ÚŒbè ª©%4Ø•ŽIûïD‰d,©Çðåuñ¾éÞælŸýôá^n¼„Ì€é~»«ÁéwïnÓ9òͺÒíkr‘ͱ^{S¿yßîlÙ#ÔŽ©ñy`U` ¯EÊ‹€>K1زé#eZÁËT›ŠxZ+ˆFhÄFpôÌú³÷ab5¨5—/ƒP¦hÎ8úÇ-“ë÷‡½ìß8»ç™Ëë—ˆÖ(W¢dëÎá ÁôãU=€*”-¤«wÇù„Ò ½×"­-œ`::-;/g&BViRX1jY^xJß$6 ×¾ÛäÆk»l4/fä×±HQÆ6r)IȪî›0fH2Ñc𬣲†Í HjÚÆa{ºrMËÏ uz¿Bu€=„"¼–>¹1êeå @£ÉRÀ6¾|U‡ÀŠù6%PV­ˆGp®hs#Êg& 7iáÀ7ÆRó9þÆÊ-\¹‹Ì€mw+KmÊrÎ/'ÞD[²#—EÀ¡2# ìÀßWqR¼&êÓ«0VÂþf^eVÀ²W‚ÄfÔî ?U‡Pb>®ªlÒ¥1¬$^.>—ìÆtcýiÛ9ð@ЛÖøîC(+U{¥Ì@ºzG.‰ú,š³v>”)œ)´Ð*ê¨ãX G= ªyút¯'¾'ÛT"ø{«·¾ÛD$,}÷Û£û;ÿ^‚%à û¡|¨| Sú#ÏSYg !)Œ0;˜.2dåÙØ*R«Ô8ÓÊ\/ÞœÚ9Ý:Þ@î¡vÐp&:^/o_®c˜Àd²¸cõÍL[0_š3£F˜7D>í¼> ²Ÿ£É®ÇIkí!Ûü<@µáÊ ¾‡ÜžG#CZ¦hQJC!F°ËÃ$¨‹ÖPÈW±Èf-™P-‹B6o‚ßb”Uõ_8“¤- Ô-raÝ Ùß½Á×OHçºßEi&EœðC? 
—£ê;ã­‚?_Š +iO£¢Žº z(…¡Æ™ÁáŠáDÐk 4Dä:'AÅuÞ·éÔuì:6ÔvA-Í:\sP§¦‡‰¦©ŽÑÇŽƒŠkÑY.Ñ(P‹Š…Sûþå›zQ×ðFòÜsÄ•ƒ$G‰šBklºÃŽŠ©(G(ÆÓ'x úº ¦¨«1RHØgV—tm—‘É`´x¬úlЬèw érÉuÈ=™EpÐ4±ØJkhWñ$î/!t(1ˆxåfþ‹äŽ“ ý+àì‘S«+¶$.ÀÔÏ Ÿ\8+N æà•F-®Iˆ;OÈv† w2NˆÍËm1!C¸À¬,•£°¸(¢|Ÿ„_)^åýÀÓÀ‚v/f4÷ýH4›Ö‚£½G+—ëV†Ÿ:ܯ¾-‹Ø;ã>ö½ÉRÚÅ:øÞåÎ{Xâ²ÂeE4Àñnïg}àU#QHÎ/¸úö’6Hµ gƒ‡núä¢.  Z§›ý% £±úÖùʼnęÄéåêèò®”,elÔ!¨†´}´Uörclë²{ó ò‘2D‚4ò´*ž¹•GCGO%_§†NE ´KSáày¿å¸RøfplTE¨gýÅ1ôEƒ&„Åa2`ÌüÛßIÆËÔÌç¹ï¤Gj;Šƒ{˜6‡“ ,…ÿPÈ=Ð)(z—z׉L’cAbaœ)F›ò*‹¯æ>eæ© ›Ùè6 )¦M˜©ò¾û‘î³Gêè/4-¤6¢ÖÍß~ÑÜüñ‰äÍ ,¯¬ÌL†RNâ,~{'2t}Ô>ìþ˜¤©²íuÔÿÜU^Ç‘¢Ód|¾Þåbå‹ã9$ƒyÝeí½˜¯öÔÈ`›£ÊÝqºb!¡Þ,D8í»Bq+$.š•’L‘Ǩ@¬Üê ¦MvOLg£dˆ×a|±`H¼Q€"á£Aüˆ9Q°˜Z®„"ò, Ž—ÖH¶L®åòÉæÓ}»,N|>¾øú~¿Cœè#¢ÇJdGú@œ—ȉg)YÊ×ÐãœÍJ¼ŽJøzª»ˆ€*j(f§ã”1y?ˆêÆŸžæ ³†Þà¸Æþ\£¹¦ñ –M»¿U¸U¸Qº[º>yË|K3w'©3Ôôw5é¶<èû¼…ç=W‹i²d€½{ê;7û> "ËrŸ6‹DDmB´§¸Èuè~î¿EÔfœõfJçSãk+µv™ ÙWäm€/±û¶1‚i`šÞ”«Ñ•ïº%ä$YúA”U³ºc7­Éÿ©O»ª•S6@¥,¶YÅœ-ÚÅ7³›Çù­û¨v=—tçxg9çpÖŒ‰Ž7‰S‰s‰0îØfÙ†Y©­4\èùà Ú"vBDÆ-8‰?ŸŒ[˜?CùMo~ôÎÁÌ‘ÂÕçLËpü¦pº,Æã“Ìñt#ãÈQef,™ÉN’ÓóaÄïºø˜QLïpœ|j‘NöXK¼·ÚM1£\C‹ü÷°˜‘Ò=¶÷~)*Dhätö½’ÉT‚$vbÂxŽvcð¶k…2=ìåé”EÔ‘¢¤:b¬qºEculª óþ¥‘Ÿ‘C;%"”Y³i3šÅÜÙ¥@©ÑÓ“ÛµëT¥2–ÉzÙ•\=_8zéÀýò1Œ<¨–-.Û‹GAoœ°ï²PèTDØŽµŠƒšC}©Xª¶˜C†Qe¾c„ÓRY];<t–•&(ן—2;º(”Pöõ%ð¸Z,{¡Àˆ|3hOþ*ÊŒš·vg’|N€¹¼vôïN#.PÔ²eáeÝ aµ(—šy"y"¬¼af¼’v×!*“Š:DåD]Sdnê€|ãËÖ÷+ òÍû!6ϺûÌ­¹×žõÇfiXŽúé,…­zÓeëõœRÄlxþýåîÎ’«ã/‚+¯ýóß+Î ”cÙéµvëm[Óâ^÷ßD¸Þ;‘o“è>‰·üШ0 -ÈîøŒÞ¿ç™»º}>Á auBuŒãé3ß¡*ø»7´_­àèzt_ mç.-W£éð˜”. 
ëöÑdõ2º¦Ý¹îm}åUùñÁ¤–cùøj’0õB)Vmþgç0T&Óôxeý†`Á£÷«BŠWäUKæô‡öT©«+š©iÓ€¹{¹Gc™@k˜‡RÀy<_:éÓ+Ê`TDwfÆâjb-útˆƒ#ìÄÕ‹…ŠøæÕ}‡Õ(²’6ùƒðoO8÷߬&IÅ&Ken“L ÐUÔ*5‡Va¿U#í£‡ÄþSš˜ ‚RPšV"kšµmÞ |±qgÔ|£~c©| éÙéšAú×ÛéPq8÷Z—á—­@®B9!çþ³¤žMÀØI8p.°ê7ÄûYÂTÑ#¶ŒÝ1>Ÿ^8 ÿÜ_µRè*ý'”Â)} ‚Øû IòÄÁõjY² ÍánÌàaöße ±ºï¿erÍHÑÈ"ÊæÒo³jÕ½uG0Ÿ*ûænO³ÿ^‹ ïÍNQµX„Ÿq T™`Ï\ ûÉE½oT¸éOÑ›óL†ŽH»aïŠÞUókÝacì§Ö ½ 3¶y“I3ƒÎYÿÍç ˜Db½3|2\Ý••‰üR¯VXe´ô¶üʸ#5M´MÔÉ‚]Æ |¡è½r—ºÖÏÙÁ¹¿ë‹ ÃQš¡ƒ ïÃÆ2‰T’%œÓ¯jHÇk#y‚’w;Òv®`Fæ³ £Y‘®c¿ò/ÊÉ &¢Ï}i}ùÎú%G}…¼‘@Ð~ƒûò›Û•">M4ÍÛÏË!¿¨|Vôò*0K<"ëQ97¾F ë¤WµQ¸’i¦ÿlL”óð8 ¥À¿%‡}aWÖ§|eRT6²Òú!lRó< ZôÛ5S¡Vv:ÜrMAYUÎå̰¡–±ivÕRCÔ,ff•QªzÌN=Ø [á Ä-÷éÍ¢=>ÒfµM´ v­Ô²RýÂ~Þ}&QñeNÕØz¿†ÿ KÊš#Ö"§¶²wà#Ö÷ƒÇh“]ê³~ºÚ¢ïÄή4ÍU^™ÍPæ…`¦¥ÎVšÁ—m‹Ä¼œ@³mnh†æE˜¹v^ñ\ÝS/u“g–<â™ ¼ñ\Ÿ¡úÕNòÍ1ÌêR¨Vœ–PV6K±9.IJ©U/ÏPU:ñѸû³>Õã¾M^3+ ‰-ݧ·„^ËF”£9Ö¨sÄú¹à¿Ï³ôs«*u”Ÿ’”‘eÊmZ‘bZB °=x&¼^[øÚü¾¹¸”}Àe‘eúɹSkxØ¥ýÈî’WTfñš)¼@‹˜þz@Òê§,YYZ¡›8(!'Â3ˆA9¤£žN›£ GUby‹gœz‹ëb ¿ú»`ÃáðËæam­_9t|€? ùŒ£ÁCYšÂ†,¡~Å=Ø(²„CÂbN¤hK¡œÕó`'…ÕÓá®dwŽH¥lÙl—ÉU-VW³¶N*j ‡ a#<îO‚Ó˜ÉÓ++?¢*ëŽjX+óAmŒÏàù%Í:¨V…—§f‡K(20*ßÉ@ÂÔíþØßž…ýÆc}Å9ìÑÕCòñª7±¢Íü\ÇuÌîýÀô¹ß²tkí±÷Y= Nõ9¾yT™Öíi4K-=Œ°¹A.a®£9jÉꪑ#ïÅ EóDé}Ы=´µ9µõï¬'ÝVìÖýÂT=À1ø2º½líiËóÝT­Ý¶¿Fµ;M5v8yUÇÕ]k謵Þ¯|mšé°·Ñ°ÁÝ©^‹râÏê¦z êlKf ZùoG:½âH30ag~»ðbz×,Öð-ÆhÚ€Gë/ÿëÎãÓùÅûx:µl#›N`p‘º[û~à U$>¬ÒR:~‡è™”]þÅr~¦|o´ÈCQ‰iY<á8ü–dMqfßöìñFƒç1””„t5À¯ÓFó^Ç@¦<åð=Ú¿'?SWx4pL‚]é Â"­xD 4i?¶~tæóëÌÐ $KPüJ©ó„]Ôò–}«#°"~¤,&¡€Â•T›öøM¢;dé^·{†·Í>cÓ— =j´r/%—jMÓsÅ’ãÖÐÊ¢®¬Áù8W©’ XckÍ¡øáòp]Z™ØÌ¶÷ò*3 2]©¯¤ctóå øf8Ÿ„*[ó÷uùÇSŒ6eþ#ù05_g¨MÆV½^Û«Û$ã—c°Þ)Ó.ôãQ¥Z´›£_Z…Â`Fª\ÿ ¦YÔÃÑàN¿*¬°QHŠsCŸþˆD"+ "â†ëÌ? 
Q®v~V"C·ÊË埒jH<á!ød!}¾‡ŒÔBg‰QCåáš¿”è™a‘R KjýÑ”¨ióÑÍ[sC(W°Y÷ˆ`ÇŒ ƒx•0'ŒÌY€èüû²nJU!Mböôh8;YI)å¢Ô2|ÏÀ¿qحꩱ]ôUšÄ'åí2v›gÝ~àgÛõÅ×÷Ö‚–î8U}@be«£´—øÉfŒ12çü}z¿Ž·÷{ܱwÊçFãt\J¥éçµó>µÓÒ ÞÑ+rG:ýÛ˜öuø=‚-Å:À1ïhgaÒöðY‰Ú,}1 ¦ÙÞNî詌Ôp¦'š¦u€{òwï·Ÿwoþ>¿^ƒé£fRwNga”UpÛÔhEã ‹á£Rº‚'PFþôµ#DÈ^§b¾Þ«zGùÌ,^èpÆÐžHŽj.íˆÛ·£‡‹ KÚüiÒù Ý×£·«¿+x»Çø.xþ ZÒµ­+'ŸR¦ƒË÷‚yKñ©$¢»^*ï[ås‹…Ù`ŒfZO_º}dO‘u!¼Œ‰c(>!Õ-@µh ižXDjÁe¨b.ç–,ÌÀG—\Ò²·¼2s·DÁ>SH// ü×aæ)cÆ)JnÌŒŒê žÑÍ­¤?¡Œo]^®6=“ÿ Y;êAZ¦ˆñM‡‰a¤å™rÁˆý‹¢6­².[·<7¯°æêýjW“Ù^Pº­™AUÚHUV:^;-Y4H~AA µ‰æLÛÊ7–=‘a«^-l(kÖ Ø—ªå Q†ë(ùHeèU»²óQuè+–NÞV“¥fœ½õoÛ-³ bZêñ6V4hÕ€cÜ ”Èßàqí5Û÷ï§³ù$ñËS2G3F¬f!8^N?îU=Å´“˺•ãrH¿„  øÉòñ°¯¹6áw¦Š{Eß’zª¹*G­CÍ$ˆRȸԺEµrÛ@F‹*U°lq›%³ºuT°¶¸ÒN™ÍüÔüŒó¬s[ç2lμ÷Òw¾‰î™ÝSú§þôª¶® ï©;ßWÖIÜkÜcÝeÙgÚ‰û¹ó‰èFlÖé6Ñ»°m­¦*ß¶QÙñb"‡OÏшa&Œdͺ´–¼³¥t>&¼úø•™·¬}¤›«¸BAÍœ*[cî-y,È®S¥ÍÂísÝr ÌÈ%Q*;Ð[bþ=DkÄ0f‹„]§6ZT+ÁœÇ–­‡R¼H•Âà|)+¤*p›g¾‹wa±p¤»U¾KÎKü ªF°Kçˆ`‰Žr©è’è2¯RãÅ5Æj\¦¶>ôѪQÙÜ8@ªR¢»:‘=ì%EÜI{È|ô!*ìB¢  Ükܸ<ñ-Oƒnñei¶M‡iâ‹dægvIÜ´Æ@°ŒÎm² RCä³ÌÆc¾ÆñWÒÔl‚¤IBZ%o ’žx….¨Šb9êO6V£I³PÙj¦ÄjH”)ä‰ 3Ïñ¾Ã-üÖ@ØÈô;š-µ¾ç7T¨JeßXD`=‹+u§a è”ñuW\ÜÌQr”~Æl9Ý9ÜìuŽëOLU!å”ÅrßîûÖ‹K’YääÝóá3šå$à±PÝîã¥a'<÷®—‚±§çÛálèVŒìl'¬h̰Î\oágQ_€F½Ô”}Ah¸|/‚Ï®}äQVŒÅA3ëæ½r}åd­ÐÊ­‰ÁÓ¦ºéÔÛÈÚê!æD˜ÓS˜¥°wÂ;ÉÄmÀöëgÛçÒkæM£×ÐøÓùúâÑá6þSgÇïj¥Ë«º€ÑÜ…<ª,…â´ä³(~ðò?‹u*¸ gsœéŸ=P9Xæ†!÷̪8 —#kigªÕ³XéP„ ÞSà"xôií=Ÿ˜æ6ÏTs[ŽhêŠ_e¦Ž°'æŸ<öi•hf±éÉüŽmìf™h"³a<æƒ&Žw:©‡<§ÒŽN.ñæ¼Ðΰ=ÎÀh²¸õž¦”à˜â€ûŠF ZÅ!ÂCœCåbïaÓYï2§Åg¢êa ÌõÐûsÞq禀©&ì«ùÿvŸªÅ>¸¤QÃîÂD]ÔÏ©bïðS?‚ÊŰçЦ‹²½r=°.NZLr­ïj}S³C‰4Ô¤\}Ü0Ûú²ùªyÊÑ•w«Û5&&°<¿9ž ûåý£ƒgb’?Lj[áü(ÜÆR§ó žïèìëܯœ‹—!‚Û‡ä-nÐx¸¡Å Ô¡k Y·Ò·BÀ‚ÇBˆqéY~Õm­ë4´o̘à° 7ªxN1h4p  ºyùÜg¸`˜ª1ÇlM³=4¤·ødÒaIñ$-*êRí7{VO£°P‰ÜSo#:•f„Ãc)A¥M¯õé¸â,£&{ù‘ÀÆvа¹ß\ï)š ¼Rú¯¦x÷^çû\m*·sÇ™ âw„…Ûűyÿ–ýkðY€[ ;îµFÚÆ1 Œ8ÊY(€¢…›°›øìþ™~×3ô¯«ðËÊèˆ Ë +Ø­×®šàµy¶®KÌ¡â‘×UçÛ:dvž…ØèQŸš^+Œ!œ1¼ýîQŽÕ–mÏQÁRÄ•#Ë‘ãªðÖz¾–ë< ¬à.áN½òˆ¤Õì0ŒÃX‰F¶¼ªÄ–ê‡5J•´K¶ˆú¤¼baf B³œ\SŽ0–‚«Ú/Y?É) (*;¸l£¶QÉ˃›^“@9’=ÉÏoî¼rY©aˆ#ðˆ:†sê é0ÑÃÕu ŽNòü/Îý1\Ö¦YE‡msضmÛ¶mkÛ¶mÛ¶mÛÚï÷õZ§Wï^}ö>§ªòªˆ;#2#ŸÊˆÊüq»ùïòbãøÊé ÈÎ[«Q€6K¢y¶n¢H’ì£R^Ãì‘é3bFCü•Ÿæwý"4B¿öº2»ÊúÎŽ’ibYÙœYC ´É®že5<´>œò Þóé—iåG2o¼´f‚q4óýÈMò{3câË(š,¥>—ûD 
Å¿džR%Ñ$OßT¹¬ZÐr7-Ä2DR3›A е·}?Þž³U©IɦO}­FÙÓL ˜>6ü‡ƒv¤Ë”QdUZjŸE]zuJå 5’(Tßb´Ýï°§­…AUøO†²Ázó©ßl¸”w2ç”ðX›1]–¡)e°ey™@>8«* šÄ\~€4F¨ ¢a½$?5Á§Ë5^×$žqÊu£7‹“QÿpÏ\ŸHýøE³L’‚ð¥5׊K0¡“ôt-d_¹ûÞZenž3À£Oð¼¹Òzü±J­² ÄcÉ86ù¬P]¿º¡ ¦Å±e±²!»”¡º‹÷r“½zŠ›¦©åƒíÐí‘í“ìùlT»³û/ΗKÇh ÔeQçß븛kÆäVRM­áâ ïlÍ&Úœê±%æ3¬wÑGn?!go…õÆ…fÆe¦¢üÍþ[úÛº[{o‰æØ]3ß%ê 6Jé®qkJ¸;@§<ö|öÍÓS%™Ÿ˜Á±×;fîhc+¬R¥y¨Ò! E0¿‰ÓÉE–õ®65N51É?ÛïëEÒz±œÙ´ü4õ’Õ²µtnHª«¨ÕeUäµQê®›‹¦Ö5 G¦«Ÿ-_k¹1O¡nÁéw7ôpÊ•[ºŽ‚„ÿ¨î€4ww#©Q¢Tü•*¦¬ŠBQV}zQ§Vª¡#§[s´|ŠyP*ÎL‰ál®êT±x0´Ðü“ýg€¸{]úì E-PJòË7W•/æ”wÕjôEÙ‚öêO™è¬ZDÍwMb mŒ;ÂÇ–KT¹à±x[ŒJ‰á¡ºÀÙԃєy¤e x`í‚ütðD†8+YëX÷ß3ÁñÅ]a¬ú7Íô2J”t=R%“4H9®]ð'™‹7”jó#¶Äó Þ¶ø8ÉéU,+7¬÷kº^é¬ÔeºÊ>… Ú!-¥ã€wªhgô™(‹ñi?®ªqÎ1H€^!×kaqC–UL}ÙJ'$=w‚sŒK”#ˆì HUÔ€1–̰~™!âf¦0éõ¿ú¨û2H¡e…4\»èåñRÑPÿ2å¦yYÛ`4ŠÇhþ᥆GÕ; |,©Â”®7'¶6 å@cÔZû†5®¤Ñ*n19L´ŠÒ°!ó%dðb%Ìè_k0êKÅ=ºÔ§V¢µú8Ú=nåܓҕO!õKjÒ'pb¦¼<’‘åß9+àñ %k¬€<=ìŸ%4Èîö”GQKFK©Yóµ|´®)2Aµë6“•¸y”s¾¾·£ér«ãÕ kŒ‚wqmi·év ýÀŸŽ†Om  +|?N­eä2’6È/èk½x¿±Ä âi'»ÿ>ç¥E6²Jc‰I½®©Ÿq¶¤WtÈ’°£.Ð]N³LãQßdŸ:ëð/JÓž ë¡`‚Ež~²DŒlb¢Û&µJ¦­.8%3ÉöǹYoi¥5¤ ¯ÖPf±:¼ÙJÎß–Q *áWRÒå´Ë39ldvÞiÚn™æh÷Ùâä=<Üb/Ý8íËoª€¬feYW‡?¥×”Yµ—’±s “¦Ääf¸¬k$)›£SÜÃFi“¨—í'gô<ÆAÆcÞGS¢K"4WزÔIZö‡Ôâ=x÷ÌÌ%ç¤D™D‰ÒìaÊ¡o"½À˜ ~ ð±òä2š£zGCf5¡/dÁ52ýCt–nÈ©ô™§^N < ±i¦{ÓT83ÁÆ[\læ°Ž«—(?„jÖJÍ!Bð„ujî‹¢¡CôØïoòl¡wbÛsö¦ž«$# þîFd'îê4™0l”!+0šÊ§@qB© W{áÜz£v—¿¿嘣Ð HbÛ¢hŸÜ:³‚ýÄO{¦ \ °‡´§{–ù1ÁüùiÝj®_ÒFÚ×ÈÍʰH* æ{‚ž«ø Uµøëx*íztŽßHûúºø+ÏÐôSÔÌ8Àöy¡°Q>ûo•Ù @"\1Ê2­„r)oðJª®;øy´eë¶û>*ñ*Ÿàx Ÿw ·ÔMü×½ßQ/Y¹Tû ZòúóŽ£þº¶n¬Øžô\®Eé—´v³‰ßøQóÜÇ;ßR7çÅ. 
𥟴2Ovã7ÖçT]%‹V50-*ŽW±djéÙÔRSÔ8¨Õh³_9.//%Õ˜O~sÅèéQ¯HÓEû¾R-œ¦ž¢1²¹а¾ñ>šž¨~Ó2v뉾QÙgiNÈ#ßùN’ð^3gSƒ%]ø7¶)?”a‘“ƒÅÅæØ~9Žæ„Ù¯‹±]uÿ9ÓmãYÅÖèÚ|{ÑÛ£ôÎ/5ŸùlÿÔõ8LßÄ÷‰s–”Ì«ROÛ‹X‚F¡öÞ‚I¯*/š1¥ Ô8þkóÌÆ—¹e º°d[/.âJ]”¶ôÛ‰?Ñý²ŠýBÙ¶]fN§Ù´³Õ²êÄ7±Ÿ3¶õzW¢¦»ë÷Ií71}HÝ ŠL\ŽÌX_&ÙjÄ[•L«jD-LOÊfÇqÎÁ-ñ=çÔÕgÄnâ9ì;r+¤Û‹þ´vkrvQ•ò}&ϤËg¥èuÍ‹r6H”Ö¯nc¸”/ÁÖëy°ìœDW*÷{ï“•òóvÍ¥ç›.4†‹5°úÁP’·`Úð`Û¸yxÏé#9Q+LìKÒ¨y –@qCBlGHéÑùÐä»N *©wR –‡4›–qrû3²‡µHé+ÒoùÛ\í]êGÅ7pͼc»1~Ûtõ°‹öqNõ|Rû›s·ëŠÿ»öDôÞƒÿþ»ýDú›sŽûƒû;ZÒ{ÿÉû‚n%É- ànTF|ôøÙU¸RXï…‹×™¡à±Ô÷æa½ú.ÃÝ>5ŽÍøz‹AÆM ÉÀäKù ¸;!*&|þd¹3´qåêˆÜì†ó«³”F÷X§ÀË”Í?Ít)pfÀgd Œsä8h|@¬¸á1ðwNÄ,RBXœàÏsccúÄõo0Çú¸_»ª ôo’T;»Ðkâaeªt[›¶îo)rJŒ~îÛärþù¥•jÛ„ÙÁnð}æú‰ ò­nzôk×7ïÏvt=¿&jxZVºW[åì¶ßfrŸ}¹¾4+}ò¯€èC2ýÂöSÔLr˜ν]ÕâSÙ™…ö+ðvf‡›sVŽ~É $š©ykhIAšRc:.uéáó®EoŠö“®æ“ñ'êôUµ£rR’ti‰¯Réƒd¦º5·ìí¾ûwt÷üA}‡$ƒSô#–šR(•ëÖF ÜèýU…Tj‚fæ:£ fˆ  9·­QÙ›x>%U”˜bú7µiÍP¥ŠÕwå¡L:౿%¯üR*f?ÑD¦ Û¨X„(ãgDO@o”©îK÷ W:¢­ÂòuT£ú°{ñò_¨€ˆRnq¤ß{.‚šƒc @fðJë¥Áå ‘ù [y¤#g.–^†kìN¿ÌçÁÆÜç9â¤VÛ3‘Æ uÜǦnŽŒ…SãQŒ-Æ™êJ¯ue4ÐT+ÂV† u8Pê@T2Ù%JÍ@çòE;óáFLE‡%Ñ1+‰ÉDO´}X{§¥0L;tá¨MÅœëµ7ª;ÆÄ¯ ¿¨Z;},lnþ²u4¹°Ç NlsPu×LLÝ—b›éï¬|TTø§ÊV[g§ñ$£K Ú`JĶ Ķû¶*¢;«ßò¥‹ËÀÁÔÁ*Z®úMIº³yz)éŒlˆëË•ð5…·$¤Ëc^U9—„j­mØXG„ø«èO°:³ë¸¨„ ‹]£j‚dìš1§ë—Ø &% Leòò°“iH I£­®¢n½¹»Bøî.=CP=ª)7b/M±š4íß¶î|é?ûlNžËØÞ$$?{x5ú7YψoŠã«'Á“X…$âѳ֯׆?¡ Æ®/+Æ]fR&®)©ÃŒR×Þ—-gÐ0´±ã6Ð%<$"\âœs:[¨[nñßòÔ̲҅N!y¨Î ôå Éy¤¯3Q¢Ë’D–`øFFõj1õ–¨ˆ Kä.Uª… ”¨ÆÔg²³5‹ÑÛ °÷u5J1yKÔÍpXGŒ,ƒ&ø¿xŒ?õq™&è§¢âՕ‡*$ 2ŽG¿ØMs“?˜H=*Òo|]*0å@Ö?ʲÿŒÛ§|IÉ!åëè`f5#5qÜ'f¼ vÖ é¥¾ Ô´'Të¾U[rMh¬äî–."Í1! 
2i4HGÝ[2âÉš8[6hŠÚ‚–ez")9[$%ëV/—5mD1V6`d¯Téö>øk*Þ@8tïEÜÈ—KÓ-Uí«Zá-“Nu¨£¯ŽV;¸ (¡Ä&«R4ÞDLyã,ÿÙšu<¶\¶|ƺ¶ü¶|½˜zX 5é¬ÐZ=¿y²¦Ü°Q±U±YÖ-íð úäœu½uÿ@»‘}¾ÞÙ\h5qMÞ©öäw‹}Æ;¾ùäyéMêa{Lû€œFî ÛŠõÀ{åÝ÷böáwá÷QçºáM{÷=1zíxì‰ É>‘6;§…e›^¼¾…}J²Ç­€s) Õ)ÒÀQGÐK:YqW-®ª“© }ÖÄ,÷×_=ïzqŽgC$W1qñÈÇÅo®½7fÖ•5K±\1R¨tV/¡6îïn‰™RÓͼ‰áŽÕ«º÷\ÆŠ^¹|x®“¾Pu Å’#ãvˆÄS /ÌQoqÎ ãòÍ¿õ+E¡{ŠÅl–œ»t&„~iëÖí­,.¸Á£sw_ämæL’@7fåh½!'"`›[ãTØ-âŒz(dYM·‚—ú2îœÎu|@6+oX:ÚLtnÞj~ìÔsžþý“·Ãk‚¾§Çóh^ERЧâ]-|4¶Z̶(ôÄßWèŽs†ÃW £Ø`ü×8'oƆЃˆ:MpBÝè©î°tŽmH¬¹Û1¥— ½“%çÓáfy`ëɧëüÈì¸É°êÓF»®V5u2CÿmÀ•„þa˜×ËlŽk>ânú4¦Þæ´wÂæMŒ)ÜXpß/Sžë³–2R­© 阋hÊ‹ÍÇ¡™!¡ßZX ‹ó#úDKKº¿BÇò ²çG¯ç–à=V¼m`l6/44õ°<°½Ý\…wé(ók×ÐÍ«%p=u.ÄIm@ir'ä¤Ü{I¯V=\ðn  sÀ{ŠàId%lc€>½=P£äwA mx y àè.ø†;5Ñ™`ÈþTÐÓÀ$¥($ÔÁŒÿþÈ(¸A.†Îß À§hâ6 6”ûKáþº¼×dË/?½ÂJž*±{ì¶&øŽ"÷LômÇ÷æ‘èÎ"Õß íøÌƒ`—'n„ |èœ<1ÐùÄÀ½+¤Ä¨Ø¡øMïh#`¢+òª1ïÜˉëå5ï@H%ß³üDcÛ:Èëàw»ªÁuÝ_¶‘Ç|÷¸m²yû›dv{Ç©gºG[_úMf Ë6ò‹ ƒ¯…áè,úÀ„äC™õùÛYéütâ$õ¶Â¼úImE¹º¢£“džùSqzºÙ…i'Í£V·|dËç¹7kxÖ"çqi‹ìDäÔeÆË{çý ÚCÃ\óí¬À º õư)å •¸Z©h^roÖ׸!ߢâýJ³œ®_/fƒ:ú´TŽ:ÚdHÓ4 …2Áîý^ˆ±Š_Ö[94™<ø7¦r.ó­ ¦R!é=Æj®z œ±J¥Ê¸6Ñ÷BíW9ôÙžiŽÙu.*”hÖW ¨}îULÝ CŸÏx¡ÏÐѯåÀ*ˆfQU·æÛ„zåú&о!ý â Ú¶iž¦{gò$q}·më~ñ+ iù¶6`roPO/Ò„ã(™¤EWî&àñ±? 
{—<ô÷Ã'ymHM13Œ!#Ÿ(µ÷Í'÷ªÕ´,žÙ4Ø9Í|›Ù{tñÛÃzGÒ<ª‡[ÄAöL×ÇÉD^ÚäHÎÏ+á (rþšâ¿ ±»Ã¿™|7té¯ n¾Ãu¼¤˜›@Vÿ<à&‘óApudz,ðI Ù}byxäÖy¯ øy@0-Äö3ù<''%¢H–›ÌüEðŒ®¯Jô,fÙ–þ4ïxVX„×iý$W~¬Ú”SB-}rsÐFü~VÚL6××ZºlÄûüåÔÓ\ÒÒ¶iá»yÌz‘ÏÂ`Ѳàüi!Á:f²&ú¸Æ*á÷Ѵдa‘aü<ç‡öÇ(1øoñ X¨‹™…'i}ÃKDq@~W(܋à /ñ4ÉÜo:ŠØ¤ØÓNª2G0òH®S¼tÈlFUß;¿{ˆŠ±óhøœÇßÛIˆhsŰA6¨Š_Žm Ûj¼B:M¸;ð¤Ù¦·ë´#¬Š“ZæÙ²™›uß$Øel3§Ä*.Uj²} Ôk Ëߌy¢;%f*•…ŠÀXÛsB yL(k|t2€ÁŽÙºS÷æ\7zCi-†½'ªQJ_e%ª³Wôðm÷¥[±.J]2íVQ²OÁ ½açà-Ê€¤ªšÅJŒ«KöÙ„.,êÀ)H ÞÿÈ—…”ÐDsBò½ÈÍÄ—§ýîú:`Û„sËv½%Sé'Q‡)ðÁOYçÙ@lAô?M®½Í‚æšè#_˜ñN9í·ЊU<Êp™ì¼Á$¾ÞfBiâå MýnP{ÕWw©¦|¨â}Â.þdü’RܯP(÷+SgÎÆ2ç°Ê¥jž›ÇyªÆfyRM‹€g+ã‰~¼c½[;Û&ßÿzFtw¥K}ÌðU¤sË´Èä8Œ«_d—zXÛÄyM–.«ÕjÓ*á¦ê/|¥}½n˜H/ȉý̨ӵN-µ?.2ܳL“ä^¡æ~9¦r‹‚–µ«J†é)jžsÏ Ü/”úe.ý@yÄ$$íË5…,ɦ.#‘Zs\²l’ÅûOƒ¶ÓÜ¿Pì’´f¯41ãK5úý¢ðæ»3:ÿ&ùzÅÓ¾Ÿ²,ËÔ‹®š°z¾¿;ú¿ôF* àêÀ¬C@yþ]a—èujàVì)y¹™fLIºÐÃRñqümû+Cw-ô[%Œþ)LóI¦Eˇ.™´Ý—´ Î$ƒm³œ3¹‘÷ÛðÑ“ÆH¡GW|¼qÜ<‡‹–0 ü+yyº˜8bÆ«ËIw²øiÌ%ÙL’¬/Ÿ3ÉgƒsªUõÜбs×;q=¬ùs„¾aüaÓýÅÌ"†û Íá$Ç :À·L®^hQ¤ŒOá¯%GÒ‡«*_•QÅy‡é®eGi܆åþ׫JHLFâÛmõ{w6êÝí|ÒÒΑW›wkµ>¿¹ZÈ][}gúüú„ÎÀÕÎOÏÎïR_Í.vnu;é¤ ®å>`˜Á@ÃîáË“@ׂ5“°’šdÉÍ Zª“ÕBŠ Œó£Õµ‰îvc‹@{¾^F}£³½ÙÌbt$ÿfåç Ü’Ö‹fìÖJÕõ >.6¥”“°§¨²º×-ÏmPÿ áN)h^í.´Ç\³ÅøÝÄ9åµµÅÍÚöz—øÉö­ô僋ÕÆõÉ­ Z‡g€D§^ÚÜ¥`àDð0–"¿¹Tçì ïlm²_1‘BöÙöÊîú'çb™pêÜn{e¹1~è:p„1¶åÅÙ Fï£ÌÓ+"$YÇ&¨û{à`åç%&G lå¡á8[8=áxåyw@¿‡†ÍùªÚÇ·ÚOÂþ ÁP4Œ‘öæÚ" éÙÙûÌêúòŒµ½ÅŵÙJ'h e0¿Û(ÏÊHNÌK ,DöC“o®ÞâaއºêÖür{‹ðéÅ9îI\ÄúòúÖõU ,tcƒ ŒÆe± Òæ· œ€%‘|c¤&gÅ6_‡òÛ¿–Rj fšæ¹ÉíIOšaL^ÍO Ûæïó`ê*$sŸñ{ŒPŒ'Á<Ðó+œÍU°+"g öÕæÙÆ`ÈA5”¼žÝ8¥òüÊÎMu +̳Е¯bžLÑÉÃ5¨Ç[ZÐì´%®Ç É¹Ž žg^î$fX†úF^GØáÜ@øúÒüî*û’䧤’›Õ¬>z“SªòôQXÈΫ¾=FFKucs/ê eŽ ðÀa¹æ¦Ö§xDP÷€¨bLÌågG§D@g~lÍíÅ N(<ü³‹PžNPöÀÈŽîYz\êÎô`Ü¥ŠÎ5î\œ°"ŸÆîf§…¿©¹ÓåÈÊe§''X‘îü. 
‘>žg1ÞÆÎ&+1j<ˆí5ÞYqÿF'ØG‡VÉìK5îwNQUùLO†³çf‹Mù´£HIÞe8›[)üK“í¸ž9&± øw}nÔŠÇ;èÖʼŒçŸhG‘ÇPØÆËT6f]Bæd4¶©¯á†vÁï¯b§SÚt¯aÇ •¼b#3Ç>„¶Ä sÎ2†³©\Fc‹ò޼1y­¢OØW¸ø4›n- ʸj ð &\ƒjA6eUx»íµÝyJj’ªóìê9õîKp­üÐügÚq˜~ãMý=(\&pI<, `åÉaoøÛ«8N ‹õq÷9ˆ o9‹±nÎRg}%zIÒJCFƾꕕÂEMp0N°ÒYH*}áqÌ™T  b#Á³àê~=1-tçx¹mMbo¢èÍ‘˜__N2ä[_ÍV”Tí”ò›šiG¹Òý±6Œ${Æs¥F7l‚³üçmàì#®¸åíÌ|)ÁŸüòPÐòÌ çó0sfë6uvF(Àuعپ<‹Ë³Ûx©9ºÓL0¼³+ÜÀþÝúfhR”p^ o$8Íà‘rn4‰¶qFÁ–=à ú º õʬKØÜêéã`Ö>»K®È~7Mp+qÔÉ Åp¹Ê2YˆåâÅ1W?ÑBzn¦âbTT}µT5èe•äõ‰*OxÑm>ܨ# ³Ø!ý/ŠL‰5 b8#à8¼U ÈZí‡Û‡H¿ë…ÜO‚OêO›z±µÂ+¥\›*ï–ú‡õ‘ 3lª³ÆèäYVUÞMõwk£ãÿ³ò›ŠM•Z嫬eñ&çõº9yà¬Ý@ûGa'7Ç”íz_)‡ƒOm‚æàĉ©oF÷40èWRmŽAÈ>VùC¶ ß/ZÄY{I¨Ì—ó4žûû…7'‰M =0î«ô Wd’ÇÌÏKÿf‹ÑgË8¢Ãè^xñÁpêšæ“õSë„"hï~%m[h÷ët­JƒQžE%9©[¥Öäùfð‡ìí‰ë’Ÿ[ŵd/í›ÍxtŽü» &HøeªÐäSרèÖoèáA²àÛ~JŸ6¤wO E¾~vÜ.‡KŸ95ÿð…6xÞ¡NΟòî¹!¶Wß5È &LZ¼¯¨Ÿ ÿEâEbìtP8 ÷Š9M—¾³ßÞœãäÍîë'† ”ÀWx’ý­þmþ [¿KЧËÝkÐ'‹;é ïû;Õ^Â+üÄ|_#îÖ§ÉëäÞ-à+ï?]©G§ÕWÑlä…W}i|}«ŸûݤÀ¡ gºþ7!©Ì kißÜ›2½OÔm&{¶)Úe5×Fýô6…×à85 e ×òÏ›ù'-BÓ6b8] ø'¢Ò^DÁíˆ[¶ÁšãuÀwÝ€Ži€ÖxQ¡!žøþžøyjÒ0üo€r_v`^ði C¶AžÒü5®õ«1ÃÀsã¬xc&.£k°òË:÷ý'ÕZ>E¥8ÿ´‰®³Ä×á>ÙŒFç¼¹âØ£ ÚT°5ØœÑÃPt"Hƒ¦qè³·ØTÑã²Oè°VÆ›Ã\é»ÖqµáBÑ2kéOÊ'b ¤6Õ‘”¡Ñ4·yf—È1Œ?§cF—Y,{®ã^Ù¦¨Ù®ÔG[CW„µ+³ѯz›È¼ãm—ÙöÞ´ÝËÖëyh{z(zˆ–Ç5Y0S¬R|&MðL(‡ni W->Ÿ‹:S×";cf—Y-PQrkúAwæà˜d‚/‚›ÂcÀ]´~¾`ÑÓÌaÑÈ£®ÊoÇ®ú[ÌÑÈ?®ò{5€r§dA¤÷­‚ôÞ‰è'²É诜'îR’íÆÝ™aÓ3¸[è9zﯙA€˜rcîÈA¸rÀsõ£õ1”ïJÐÏYõÍôNŽÝ®(,¼š™§}qÅ™ÌDV×Bvån1½שk©é‘†ö5¨SÓ질KuÔ°maŸG¿òN±„bEß„µwØëÑê¾Ò§Â•»Ð"À Ý÷ nÝ ÏRû:Ü[E¶§ wGºQ›VŽOÔ—DuäC8äb;µOVÈ¡aˆc^¤ñÒÏúï¯5ßi ׿„î*M)Y×›“7ƒÆmÃ!‡ W\zˆÂûôùã¾|/íˆ\sZ¢GŒ„„Æ¢a骛·¹ä‚ ‡QLûß»Ê/-cí`c]†{ð ë[/©"a÷Š]S•éP£ÀâƒÃúAbýà0{³˜¯!1OA$¸ëbt‚•øop~–uÇm(( Ó{Óé«£Ò+ƒÃ<ƒu¡®ÎS[~–_²þìåYÄ@ZÄìÃÏáˆCÆ.“ÂÀõЫjÁËÚV¹ò²#5{õ¿ª_óœÞ¦ÉZ¼Ö(xbóFb³?|¡¯3ÙòËzmEïñZ:€lÙÅ,ûÆÿ‚ ù!äôÆZ"ýX•…l†a—ýª»ÃG£Ö†ÕúšT¹cô|P–è}?¨ßSÜu›U¹£³»Å¢º5zQ 2º_«Û(äíD±†Û!†× ƒÆ9²µ=9FHTqº¿¨·¡daŒ?²GeœhŠãC-w‘K3] ^`Õ‡›îø .©Ä†ð—“ËʂþhzE¡àøü•1òþ}æN²#áþc‹âÁ|[GTÔ1 Xu8’ß+Ý7 PƒúÍ©!k²¡Šê}½˜¶\…Ù+q,*‹7Í’èø;äWUÀ˜À­ˆ–†;jå…!AVÆx¸†¡Ô¿FVO&šcˆ•E/~nÛùT·9fÓªãŠqU'õê“ ±±?ù2wë³ðúùþ™Sôj­óÔ.•úr4Ú^1¤P3yFÄá9%0yÖÓ•ÙIñ›zÕJYg_ÙÕõÕÜAíWÜŸó¥p#†æ?ãiÎÆF³Kïz¤Q-@‹¥ÏNØ[ÓC§üjœª{{5Víâ@®F02ÓóeÜU*äau§CôÉ‘OR%5ÐEOíAHbÀ@S2Ôx—† â 
)M|`¯¾›…'e„ÿªã…Ö-íiÓ f#ân×ö–¶–£QËÅ#«ŠtØ‹h^gàóK°"NÂ5æ×‹áŽâ™ÖÆíš5º£'bµâm2uì úÕ–í°–ømÖ“†UW®)º¤ržS?ÉÉc®ké Uaš¦K4zâ6Òa‘F–ʹQÈo뜅Þâ4©âÇRïHŠuœñœÐ7Ʊ 7rѦpçn<” 7N–ég ¶™d™§4Á·—s­ÞOÍúó1ª6¼\½WöAØht+ÒlU5§¯šÃ%1Q¥`Zl#BF“k,`Þ» nâunI.ÅÕÜÊ-I:&á¨f½æÖ~ä ø¦È6Sô¥ðh¬ÜÖïx¬@ÜüI²¯ÏJŠâÀ9®Üé§ãþPàÊ€ÀGdg~€_`oÈdøB¿ {Øüà<õÞÔv­2SgoÅDÃPÙTjЉ:uÏnDZ:1UϦÆÑŠ@lL9ä$ûýØVÛ(c³þz€½À?`0CPõ‚[±¦Õ—ˆr¥ôÞÔ±K3‹Z(VHŠ][4r«áè*ÇEöàuK}½”žHï ´ [ôPÝx¾Sb^”_à<¢\ÜP&¶ÆNÎŽ&6ÿ’ì -¡ ˜Ùè þ%1±330üO˜ý?`nn:e{:Q;[ga'#G {g;Ç«²6&t‚ÿzQ)[ؘ8Éš¸)ÚÙØÊ+É(CщZ˜90CýÛVPÐÎ]‹†…•€†‰ž•€ƒƒƒ€žžU‡NÂÙÀÚÂHÀÖÌÚ„€ŠNÀÉÈÄÖ™€ƒŠî_óýK¡ad`…¢2°7±03wþ·#’³‰*;ýÿ˜@ÔÂÚ„‘€™õŸ°yyÿç:8þç:¤MlÍœÍ X˜Ùèþ±v6qüWˆÎ&Â&FvÆ&ÿøüÇ“qÏÒYš¶-¤GÝã‘ëoh,ë[Ó¤œ#2aàËpÆó÷OüÃâ¡qt=zŠ7Å© ÞÖósåßxmà†ñAòÉ{=OwÞŽß±¾;ò¹õñtD÷‡!ÑÙ£sücW;dlpuï õõLŸ5ºL9îjÖjâ[$wiËýN'ñâsóµŸút¾ú‚Qûjññ¤ÆGm‘³ä²ñîtöØœÞâ–{^v*„Ž÷:yÈó®v÷“æMéi#n1:³[RòìĤíÝ0ñÈ©sýêrvÓk¦ÒOXêô9?•¿¼ÀM÷Ài<¯:¦êD1ã©Mßû'u ­}áÊuŒîª¥¶«Ög¥Šm ûö!åÙrÍ"…íˆmgõÇk¥7ØÚq%ogÎŽeÑNÁ ù;ã¼vá©*ýgpÑnŸy>¶9í,lzNÏ‚çû/ýGðsQÏPæí5ļtNo€à‰«J¡@ÏêÌ-MªŸ5ÜwŠêI«:ü³–ç¼U½SØ/\ºwQºw¦÷Hô´x× [°WŸ&åÈIã÷ewA¯&þ¼u ûû­ÚæÉˆVªwØÓæéRýCä²ߪþ‰«+,åŸ)çèÖÀ'¿k›È¾ žŸÔÎn>ƒvæéÊÜž §hoø<;ÎFs¸>@ñÄüòŸçcÛµ½ýêìb|ç=ÎwÍsÏjŸˆ¢õˆxÕ²ÈübšºgØžææíðõ|øÞzO˜e1ŠýzaL±aÿ›b˜Ý,í–ørA—;îÜÆîž>^ÙU\(/—W(fqºTôlcâᄲl¨£\ŸÙ`Ÿi-×!> ®ÍÕ›¤É­ßíIøÏwšöìxAO=CPp"&|¹ͶÀVîzþÚAXL"Jr­¡Z]âtZö åÐåÇÐ’¯1½£ä0•:?Ã鹊ª|‰õ”ö^Kï6Åý’%ÜÚtZTAÕÅùá=xëiÙM=¬Ø%þ‚›÷èçù™·!Ë.%^€ƒ¹ñÖ„yà N|ˇîâ³ø+&˜©ËþIG•LÖL,Z>‡M\7l^Ÿÿ%¿A†pË3ß¶ær$Bᣠ…/­Aº(%=t8áá·E º³>¥Åb; Œàœ§XŠ´{å×ÀûAõĉG›#DSß ž˜Ï6§ò¯Žø•ì>3xà')£â y2çÊà€7c˜«ÀèŽ6sqíæ&ÊÁý¥âéF³û œLwk/0"«‹RZq~Âç—8Ã¹Ìøju—íB׬¬6ÑD¹âç²;05Ås¢öEž[a¶›Í_uHÒ¦bÌCõÿ–;ÓuÆøÊîÁ7ˆjf‡½í³;TBbö0¼öVèâÙ‰µœWl‚¹ö×D‡Û£4…í õ‰ð±{è•ËïÌ¿mÕ ­á1Ç÷„ïŽõÕgf¬\+•gÌ[ÉsÝ;Éæ_<æ† 0Ž˜!˜e›/¥ïuÎ  º·K£Oì7”åá`IF×CØÅó£øÁ{($Ø›b¡?%1g‘ï†Õ‰Œ)HͿβgGzçëJ¸Û{¢{¨kÄ~v.²gʪKn>6¨O¤Ú?ñÔPžÇ2,>`ìÿÓêÚqžÞ»O•Ž·}kòŒoR!Q/$Ûãua§š ŽÃ´R¤©ùG,Á1_o÷¹r9Á?xÀ·q3b¹•ˆª,¢TT'É©É)(è¨G§Ç'Æb ŒLçùMú–÷x­“éWu›Uhãæs—MXÍ’åq“EH1&ôeϫͅåÕ…Ú~ûÒ ¯­›U+ x'QÕ(ѹ‘˜ S°§¢ Sg̤+÷ÇeÜLAµÏ…v+oï]¦È}­Ko™ïóPtùw6(+WÏ‚À=++*ƒ±ª=®M¶R«3+Xg¶½’×µ1LLs¾ÅPü¨ü…ª¿}¾ÖÕ}(úUÍÅŸ˜Ä« …p~2×eÛzBä&ÀÍ6Ù…“)lË+Ä^ÝÓ阅mƒ»¼_bÍçô…  ˜¸¡·¤oߘ£n;Ü5B×Õ.¥½6:‹ä›+Ò#sa[ 
—Çf6Lg[òÛ΃WÆ®fÕÕ™dëÙ\v{b¯´v:—$NLO€6éü:—3œ¦ú†ß’x¢F5V¶XëýqÜééÉô½i½)IM¶b¨?ï?5Ù™šÌ1Ù…æÃ¬·iƒˆ<8|A8Í_3#¤½nMŒë»[ª‡[v‚\jÖ\ú±/í,队.žõŸ‰MòV¦À ]P Î@*ô¦5¤Éè€B0޹ÜóŠÆá(Iª0®¯ÈÁ˜ß˜ÌY6 ‘ÎñT”[o–’|•‘æÞß³ý„òÂgOSšÂIüoìþú Ë•¨À^ ß™³sgäÈ* «Â2‚bQ™÷0M²/^ãÏÝÇ|Î1îÙh9Y€¬Ý>#ÞbŸŽEݨïy,åp‡ÄÛ?’/JZÆYNUŠOÇO;?ªò™?xæÑÇN(•þ6qFÖ·T:£hõ”r’.­Ö>­Òé-­ÞX£zhí²“‘-¡ÿsOŸ+«ï—1þ+ajAC~Üaaœ`¡ëø–¨X|ƒ“:.@eyr)êR Ò¦qþüÓ'zxO'A•1ÅŒ<ŽC;;:=>K(ר®\Òã0=“=Ááâ¢àzSõøèÂ¥åÖ=êÖôé‘íwj`Ï­è·ÔBJ̉Ç×@åÑtsbš&ŒN jc2÷õíë}øð­à}8Á"æbÓU_o£q±ò>ý©”WsNÏššÂ¸XL<šˆS1šwn´ÇPq4¨ì:§ƒ6ç=Ê.Ú$C>Xf”…8áÜÛ¾W „·¦gRf̳۫ ž¯<€è/"Yâo¦‘â@n\t¥uõJe½_Üát,E:™zÑC©WI-–|œxL|dìõ¬ëð&æ—Ûë+³ Ä'Úý†=Ëô>ó~?ï½H`¦ )v\eËNå¥hgp-Ú…ŽH[]ÜÂxNºv©ÜÊfòÔ ¥R­Ä{m‰¿N%«”¯­Lƒ• ÿ¥Xß_?ÇJÐN¤©hu¸Þ´•;²] ;Ñ ”Dçf’M³jGªdŲ‚Ò’ö­µ=KÛ4Ô5û£¸r1íRù²f¶”Áë\W#G6ÞQÃ$fF*Šaxp?mû4ÿƒ€SþŽ5뙫\Åvœe+ƒt…vf…üØ1Dç3,gUã#[n Ž¡!Èý9©3rò1Â9é)¦äÖªÏÍvä³0:õ[Tã“bÆI`XÀü´vÜGÛ°Ažu!Ã)¹lBŠG%$'Ô]õ*Ñ iSzÍu¹jÑÂdGìàœkKWpÊ6áÈø"3Yq¹±ô°c£ŽN¡ç1Îãìñ«œ#±â)Ôà7‚ANžLn˜œÁ¶c°MöGgåR¯©¨ö[ÖÍ+^!rA(ÌË&Æ?ë=qº"ädˉ Ê’¹¼Ób›ùþ´} +À€¹«©ªYRP, .  í=[ †¼³xnÂÑ´tdÐÅŘ0žÅ‚ã\²›õ>YìT"º¼§-WÝëÒ-ÝÆ›0«îðr&Æ“‘wgöSndzõšõ&”ÙXqEÙ}¿ø%yñbs1¹Ø·²uºi,%]%Û•gij‘ÂÀÉ&uð¢س|Ÿ—Ë|™=L¾Í¡ÒFb!3~þ9HµµµaÚìtµƒ \³!W·jR›~g‰d¤–ÎÊ=5׳)P¯.Ë´²¶²¢Ôêð ÿŒîß-é,v>Ð-¾²úÿ ÍŠÊ5-å²¼Ó+`Ë ÌV‰ÚÿÃîíºÊ=Χñ4;l‡¡—ù+vØ™âFåË÷Ët¸þõ›ö}¯ýÃw‚­,Z.¢˜ŸvËAsqôÞéívŒ¿\¡Š½Šâ†nGÙÙõVÑaGéµÄlª¸£é:6H&ô³”uÖõà&“IÌT)%ïkÛ’;I¹DZ®AbøüÒàñÈ[™|ضG0i¶_åàí¶”®yö\À¸^däÕݹ;ÝëÂýÓ·Ž‹«Ùô2aîävռ̨šQñTA»n€=й¢@âM?ŸFrjvªAßN#EJF† QOcÜ}!õÙS(¼_³ ÌY_"xzÅ“0Í€0KDØy˜e¸îRBI‰wñãûÙÖ/Îò~%=jô(ß^cW$r®éV¦m@ö'=eŒžŠ9u² éß@ËÅtBp…h «@ñê ñâjç_&sñøpáýs9&Ëþ÷%d›Çb°ýûOcÅÆqKBxA½9ââ•~þN)][_z’ä Z/ò¤iÓ%-L‹ûl[ MÕTŸo&›K¿ç ªÍcÓè~0I³ÖÛK?ãÛT¥Ú-÷ùr®\ÓAÚ¹ÝÍ-¸Ènnum$Zr9'ËhÖßdhºïÕ´Ù¤#{¾c×µiœ«‰I«Ñ?Þ¾j³®µnCrN&uÈÆy«zdF2 DéÂâïRÇ›’ã)E—‰7’žÔç&Ë”å*1&bi¥ˆaà…¦£ãG¼ULSžŽÈÖ kËù_öÊ·´RñÉQE–µ:&ˆ®£…%Q0írÏ=QYœd·8öxN’Y‡SbàŒ‹¹:ו)úÞ}`Ù!¸©i¿ŽÛG óøÀËyW\æ DåŠ(xEÒª^ ˜yÐCÔM{Áç«™n‘ ª“ 3d+~9ä@ôiµ ¨zÜ Ë“"¤Ñ3%û¿=ï3¥¯q <ÂÖ\Û‰½ºK9£VWžÕ#ƧÈ„!¿â g¬{‡Ãu7‹ò„¶±´~‚/OéYUÐî’Gªchƹ¤W9nè·×È6­<‡îâ®Þ‹¾K‚boVgŽÓ’×b¼zJáN"ª²5¼rÄ7RŸg¸8ðÇ@/o!„…íì1«óé(a XåªÇÈqFÍ ¬î¢-ÄÓOMT 
è&æL`Œë0ó›9§Ÿüg.â¡`<ìP”u.‡íŠ[û;w>ã^1on-j·.É[nf^$%|zŸÏÎ-(ȚыñQWx5žÙ»ŒŒŽØëê”iÒÿ6I#ôg#§ò¢¡uýx÷[6GV>c$ ÿ<êA¼¼µƒy¤hkB¬˜—AÉËø{¢´cðÉFûAÛÞ£‰‹pœhV¾;CêÆ‘ß¯›) «ˆ³ØxÞT{å°?êžGÃr¹6Ýž¢rV2–˜ÀGÃÆÁî7øi>mk{źZwþöüLæ._ØVmË4õ“V¸ñ­1xD¤yÑyê­GDv¨Ö’Y‘»ƒ’Þ  Ñ̹>ƽk2Uª”e:sæÆKeпô÷ØõHÕ¾®Ç Ö§­^:ÃvqªšµÒíòŽÞ—gÈfS‚;q’}»æŠ²3EB~'²^LQÿTr­ïïó¾w°Õó9 †ŸCFM¹ÞÕá*°p΄×;á~BÀš¹àŽ!¨ftƒfÝgú™´è¹b6i€>^†¼£6ÌvÍÞ³ÌÚ¯Eq-lÞÉ¥w¿!š¢àë'ÅJâ'â&á$tSV_5 Ä]VÖ=¤a,rfŽ7¿?~rrï]š×™f *Pzs lªœ¶süþÝ;A1â=zrÜÛ<{î0w–¬£~Ydµ&‚‹‰àdm bŠáãâ÷Æ©èÍk>fÒ•5)Žáäì€ô‘‡¿§ú½ýÂÉ&Òø“–@â ï:–Kt¥ æÀƒ#ºðÖ.Ôµaä¨5äqÖ‘ÓñÌ„u}½;Ö3Ú¯Q5Ó|ÍVÝ9dÔêUÒCðá0ÞLtYñ‡§‘'kziÿÛøárÚÒMúÇgdPÃ˃´×åáñQ¢ÐGÿü¢¡¸Þ#¹Ž)côPO„ À}ø+¬ï³Ÿ)‡ig(Ì„â$¯8TII{¸d„Í‚m½˜m}9åþM¯PHT©ñÞð±8¶¿©|°†‰K°#Žpo­|³°sÓ5õPá±ÐÿRŽå¬ÙU‰Q´Š{ü¦É’eo½l.ÚÓN¼„³O"û×ðùCEÂäá¤Ï;ó´-ìžQzg©LU¡ ÕJ‰§¼µŸc ×á2 ø²Ù80£žät 16­&¼Ñ³_® ¸EJŽ-NÀa]eÀ€úÓ´¼n|‘Û-öþS^€7Ðæ‹¤ÁIŸj{4÷ý|^¯çSøó’jг›å宲VÎ'Z¥¶wÌ2K˜²÷޽jÚÕÛsv°–Iãyÿ=Õð%y•Áã<â¥_âš'¤)]ùªÝXE} kÔ žb¹Äƒ’a 1ø,ëêhp é׈XÙ>ÿå¡è2hµTß“úº~Ná¹ß9d$¯ ¥,r±=Žo—tÛ·YB¤ oº4rÒ“”F;C{ÇO é+ìôŒâÝqúZ‘u¥/ÞãZ D°Ü5êð—[|”7bõ*Îö*^’_^.zÙý79²Fþìe ºÉÞf=Êz0‰»­à+*Žž ЪA"ŸLV]ÂÜÓÉÍS†u @®åËFÁØÙªXý²Q„¡”°AÿkPóâOgs¦[ÚÒ¡CqŠ }¨``å-Ì¡ÅzeJ¡‰΂éݬ³ÚLl9½ù×` ù çøVž7ƒ‹ ¥^ž…3+££‰Øß­Ý帕Ç5Öô…ŠIߣ„Ýkjl¬œA>dÈì¿~cu1˜g± ¹”ûO‚/Šÿ,œÌ}ÊèœíÎX>omr×\ñÅAcˆÆ»ôlî)Ðb½´uŸ€ì©ùß"»Þ¬•÷3F$“°ècñTÔO)p|tÂ¥g û¶0Ä1G²fr0÷’2fc÷pûQ. 
Ũ[Î6ïG©õ;ò¨”-+‚&n=ôꜽFžÒ· Øðê–—˜a¯‘Û÷LvNcî» @n2&¾G=‘77·\CVCV«qëpx)Y£íôÔt ²p ÖóŠ˜>Ïúͪ膚Z)‘zÙ)ÙʲuCºó ,CUÔ$oÐ's@‚›…Ùgá _Ô+¿n>2ôüÔuæ÷¨«‘/ËP­˜ïJËèná<&•ð²zi›ÑþÙéÊ,U,s Uxd(þ÷L'p~ŸÃó9¯æÆÐÑŠ¤¥1p˜ —®ùîîÖ²ýos¹Î•˜ƒ»¹:t2=Blfr\!›C©IDBôcjo÷•ý³£Ÿ•ÈßÂîõødÊn¹Jï;%ÀíñúàÉäb~ý[/ìîWý(¥3±Ùæ–çú(ú´ìÙíÓ¨rÚ§¦’[&£Ä£cÅrŽ•oAz,BibÖ)úÌÑ#ªû“ì×rk——Unˆ|QjÜFV%Vkãø7%è*&^rÝšÎÝ ”ï‘^ÆArûtêaù(½>c@¼v"à²wÃ]–í)TóÄ6[˜kÀZw¼Ã&û/ Zmm‹J EELÊÄšìFÑñþsS s-Ú_-(ßñ^¹òLVÅÀk)/ˆ•抓Ó?Ìå¡d³§´…ÕvD±Û÷ÄÉT­'aÏ”Pq(„ ÅOú,‹‘ŠG_v¦>¦¦»4b‰6ˆ‰.Ù©wEä`õSï4Ž`üGê4÷$<߈ ®“zë”Xòw++‹N>-œÔ9Ê\ÒãØNRSQí”-¿¦úÑ99žÖJ—?ÓgIRœf3 HxhRÊQÑ—‘õ¿/wµ{…ÚbÿUD'Ù‡A4ý”ΰ;É÷(=]hÞðâãÁ/›áFk.(T”{5ºQj²š¹¶DZŠÎ¨¤ºY¦i?¸æ5…½H¹)= Þ|·ù®¥Ñ(¬ä=GÇGÞhZ›»Ï‹UzËŠ‡Ëº÷Â?Ì\ôc2|EÖßÍo¾År_ßËö‰ŠŽa›ÝÕµG¨ù ^¦>zi“n†º·Ú•Ëhj+”|Óê· ɽëªÄ†Eo¹ÜÄ©b€‡§¶©OC^Ü¢ö¥PÓà]ËX¢³û«¡2}ášÞÅRÂçØ¨¢†#¼ÊKq>ëŒJtà¬D_µ¼é|@Ü=H–±­ò1ŽŽŠ˜3~R¢ µ[ÉŸÚçj2D8Aø\{(̼ãvóm•,c/C¯4ÖÃèmíè{Ê($Xê¨)îÃeÉÕŒ²ê„¢O~ë eETD9Iw7­¥Kf–Ô‘ÛÑ%÷iÐ÷ÕƒÔYú‰t<•B×ÌÕŒÉ-ÜNt‘Ë@¨HÃs¦u¦evÃLÃÐøPðŽËÚH“«*R­ï¶ÄQ숤u¡µ§ø€á+ɵOÁÛ_Ôo8_bVHßèMŠà{rG’ékXóFFvæ‚i݉2£hΆ8L Ø}*`ên2s(v$M :©iÏÀ}ÑÄêláôORBìïw}àôíÚŽ䚃RjfÐqÇ9ÄùV¶5dìã¡]±IõÞ¾ù° ‹¿Õ¨ñÚ³Ù]|’Ígý_sØ·ýØ’$U„‡`ÁwÕàÊG ¸¦Š4°8k0¡¡[»òª^HßÙÉbeB¬âVb“ÃGwl¡ Ånµ[s,"ßT}=ûmªí¿‰)úIL‹šâÑSŠ*6Úb”µ ;2.W¨½j·¨Ò€JÛmMm`ãˆ#AÌŒº­ £-‹§Ò™ÝšVð+/d<$*ŠR¯–÷0 Âz¤¼/ÍʼniÏÌ—qŽlnüaúe]nÊ<½M£P|úè&ï[d² yÝDËOåê¢Ë'g2¼“‹B":Ç­¶üs¾|g –2û,uåÎîºKÅUi­ewCiáί_›l¥±è8®õÙ2Lt\ÒmKí>‰X‹smö+ÈᤱE[’Jª–cŽ>q®º[ït:B¹îVJ•®/ŒŽUÙ+ÈÑIøµÅ`n¤ˆ-naIVX %Ò25à:*l,r;”“Ø´û}~ÐýJÜ"á2E£¶ùur÷P<; Ém1:¯WbˆeÑË^ÖÔ°•7¶iáeê,¿Nˆ{/¦æåKh{ŠEªaÈÇu¿`‹äOÑš¢½^úvk~!ÙøãuÁKú¯…‡š8;ÞÎÑÐJ]/Ó”BçÕy¾óEÒ:°Ü¶ïVOm_Eñ,Ë=àiu@¦m«É¨Åt½ŠÖꈶ`ˆ’)Éú½›Šæ"ëwLo{\ïjÝP¾7¬Z&9D¹æ$;Pì`ÈéùdðBì¶œ¢’2}Eˆ×Àí'‰¤âÀ–û–, š ÕÙ¢ÉgÍbbµ¶ $,¦¢¦øY÷üîP³Êµå >f™ŽðRxmyûöÛVÏSb„½ÏLËØ–ü¬¤ŒÙ3\¹rîöŒ])j4¸¥™-¼rÇöv±ƒì<–¯O3C»dž,÷ <´]¯Ø°™¤µÃé)¾ýj톦“ãhUýꣃ„u£­ïq„;";‰Ý·¶íy©ÕòÑø?nóAë?+Çó¶Ê3îÙTÌ‘C¾˜úQ>/®s/Ë?tkÌ‚³uO5«ñ¥ó÷Ý—QÛÖB}&¥qzù¸Vül À&ˆÆW»´\ß>°›—†Dþ4o¸@˜Ñ(6åg•Õ|÷0˜å“Owx€•”Í}¦¸¬X®UÚЊû+-_†½ö°ã$ïÜ–|]7M9¡¥:_\ŠC"<Ò$yÖp•÷žoÖgËÆ9¶Uº.vqÍwòëø{Þ.Šã åܼ’×È+PÖø–’üÑe‰îžêKc±Ø7ôµ-Èr›6ªÀöËÅh^ÔF@@qw¥òÖ€8í¨|%íjûüŠê¢%EHÚó¤5~ÇÚbQsy¦ƒ/ºSŸ)©Îx?_*Ä!ÕåJÞ¤w}„\K4þ&#M+šø5ËZ$/?|ETßK;M;lƒkÚ5»º‡w_¿ŠönÀgyg[15Ù÷ÒÓþ 
ž˜¸É£Ì;®ýZÀî¨× šIGXTvÊzš:@^Œ\j²P_žHrˆÝ¹ù¼XÏï¿Ò©j’÷rrƒ³[£ÁáÖŽy,èù¤¸"×;ãŽ%‘}ò<µ ÜfÞ,¯ïvî qVoÂ'qÞNZ ¿À%ª5ª ¦åWLüÒ:Øß„Â)>V‘Ç»ûJ à÷w™§"%‰·¾÷p»HÿÒ&‚Y¬_Nںט›ÞžÒ“–üö¦éÑìé^ð7òM™]S!|—ÐjßßQº7*­r/õ¾zf*}|ÿ1Ú{â¬5XrSøÐA ÷ᛯqä-÷ÒXm.‹Éb`†˜°‚ôçKϸŠù™_äH…M¸>|râÄÙ¨e%ûrĽ ¤xæB­JÅrƒóJ˜®’°`Ð.F‹•¿>Æ#ýe0ÞTM- i¹¨•FòÄk¡°·äo‚ ÄÊ&ÕÉ ŒöŒëŒÊš_=™âlçº}y_RÖ“[ÃFüµ åÚž¾X“ItÆZ §‡v“°'‘Ô¬§ÉDéŸFË?~ª|“uç1r\ÏÏ¢Ú2Å¢z#Ï»[/çid”åÔžÿÈKU ¨K[ú”;³6´ú"àè8ÛåÀQƉî«yŠÁ\òœˆØðØc0? ½¨òU¿èAËTá×¥NÎÙŠu‘þÝŒmÁ惂ý›û±&£÷;ï7ÒÖ†Ñÿ”—Px¬\&$”Ã_uß(ËGeŠßnÄ«´áø°VB²Ë­3ä±Ü[¼!bºçaüæ‘Æ×E‚µßnO&ï“NŠÏwÂò¤Ã0o c•h¶U:á}Õ 5¸ÜØßI`³Oaj~eWäó¾­#àmæ›mšvåÉ";)×d*=ÚØ'Tk&àk…}é Ôãßêw?Õ/¥q{/<·Ø^ÆT­“U'²g\ñòetÚ^±´n•õ/fò5~qùƒãÿP¾c‹g%µªugI¥pÅ[GNJ»«…$k»¼´OFj]A ~©”ñ|»ŸjC¬¦8kèOÕ-FÌE…Åæûúj,ˆ,3^sæªzïÕŒ[,¤ÈHúšÈ6)Qí?Èn×rzæ>Y…VñÕäS@Ãnòl—é¶V=™HJ–7´i®j_¥¿RÞs ÄÂÿ¸¿!§ÛÀ NãÃÞ—½”µÀ..ý rfßø©PבÆbÑíÑŸ“ê÷#b…t¼E#Õ[M·0á=U Òa%ÁŠSØ— }«pˆñ½m6 ÃOÄ•%FdÍrï†a5¶&Йnòªé%ÜñÔ—áÏ9nOª…‘Èo¢Kí}}©ôÑöK¨œý‹šPqÚ(Vn«gô®å†Õ#Ï_ÿàÛ«hT"9èav²Ñ8~®ù¨zËšæ%u\+Ž:I°Í£D”äîÓ¯_çßæ´¨Ã®MS±ûRM8æ_®Ì“m‘áhÊu]þ’AdÏüÀ©ûF(1à[×@G‰z°è$c˜eAnSÿ 5êcýÏ«6ŽXGf?ç•ýŒî&$œ%é’eÍ÷"­´A_ÍóÆ_û ÕïxoÏÖÎn&G8ä p–ˆÕZÔ¯P¼| ToS>‚^&E¦%†2ö¤9˜e¿y›­ö©`Ý÷TÍo·8}ßjîcé5I°òøöu}ïÓ«lî{ÞøùM@}ýXUá÷hkë"E¼ODÊ - Á Ÿh¯2Ûg‹üäkí;‹¥qlëË~ízZM›õ|ñ\™ZmD¤]×´’ÚÑoÔ¹U^çK”b4ÂÚçqI±Àâ²|‡ë»„|-¾²GfVI¬^îÙÇ%Z—L—)ÖWq¹b@®”ƒêC÷ gwG:¬(¹eÑäžÜ/_žÍ÷ .u¥ kãŽ:cR•k³QDw8å/t†îGM… „Ýäó³ÉÐ Í{òm~ÝDÔ•‘]T@í…òsB/ÐN» §´N öv1;}3`†éGýÒ<†S/) ˆI֮;(D¼ËHèS¶ÛïGªfT³ÚÒÓóü§±ñ°HàÒçHã¹ÃMµ5o®Í :âùç39uTÆœß?9^Ur÷~EÍÑ2©ýÆ(ëÄá*ýQ™'›ŠìøÇ’Ì¥x m|è¾Ü×ov(S&鯿Tñ—pÍšê»Ý,‘ËS©‹uðz¢MÄ]{`gŸ>M¼hR!LÉËúÚšw»–$ÅÛé­àR†¹òGñóªÏÃÝ4…èn|yëTW&…:ø…‹/N_MÌÑy‰çÔõöü>çXYB7t‰î¦1MI¥®±œÁ×”°;凓]üŽÍ‹Q÷TjQ+HºëC­nªÍ~¹±×6ø­l0%ëmVöð W3mGÔPÔn+”ZP’L"ÄË&Nþ¤ÒæmÓ¾XX:ŒG$vë‹Õ˜,lf‹¶\Êcº†¡´šïø„H—X);ì°–kÔª©ÅœÐ²ñÈw%x­MPŽAw ”½eÒy{òåÓêº ;]Ÿ×͉ጤ:æó8ªZÍ2˜>Š9oƒrž*]š“Á–E5Á ñŸ•Q³É²øFðèl3G³‘­–¶’:ûUõNX¹zd°v\¶ƒ¹?ìõ,™ËÅì\õ “uü¹d¸›Áy¿C³:«û°Û´Þ‘~áÎ@–@é`ÅãÁÄžv„](ô!¢k”÷xk®hfè1 ;pR9¿zªò¹öšpB³¾>ÓK.|&ü›š+9—ã€ê5_WVÂ3,¥_s¬Èäº~bž¢±l!ÞÔ+$AÁHÑ#x½hÚÅÉÄðØ™¥@§“‡<@òÅŒº`c*ÁŒ(vIP©‰«ÐvLàe^Œæ¦ÎÇa͉½^Ñ^C¬¥H¥Æ_^rF“Ï ¸zW‡”¶ÿœš»gci‚ux™2Í 
:»õâã”òà´F~*LFÔØÎÅ€¥±'Ã…_[˜ëLêœPå~öÍô±M±M`ù׌ûí`ŒŒmd-¬>VC¯²ZþA&lŽ~¸§mÜlÑ·a¼›l®žðÉvŽÉèŽÍc¥…žt˜XJF>Xƒ¢s¾s¼“a-³xeu^l½£¢Iç@š;š¹]¼Å|§)Ú‘Ñiñ ÷#ä]Ž+#&4M·õÔÛTój½¡„=Ý{”m/D{|N)ˆý+€ëéw§ØéòOdvŒKüIcøåíŸYX¾4ÝEB1ø’HÝðÕ]Ù`² ©Òô+ù9I¹?·Ìoûy‡eÜć~ïjÝFèÙóˆ"+xîØ@Ív†è[Û¾W|÷ ÉC¥XXO —TDËi‰ Vƒ„‡ëZ¼ Élv³êˆØT ˆ×i,™0)W“Ö¾ÑSLA§h¿nôâÏ\Uûî?[œÜ©f¿æhýôñ³Tš¼ô‚G²ž 0ùÔðÔp݈ççéLgÖ S FÀו’éG±Yïó~.jëÒÐúT·þ4H=Ðuáf^•¾Ué“…{¨Ù¶ëÐ×5/µ¸·[¾è1Cî¶YŠâ3+ú|ÙÖQVŒKUeBSÇÀQƒHò“¨>ÿ 6_¨a»ôòµ™ÎEªò{.øÉ·ŠÑwjúb¬Ó:Ý&¹enYs8J…ƒ3Í/¯lÚËZ• Ùp‡‰]¹œâÞ·@`¬N©¿7¯D æ˜p˜jˆJÆm_jÙnNq¶›èØÉ ÂŒ½ý¶é¨™Þ¯›{c“’Ô¹V[²kW˜x|Š‚1ƼÿÊËUN¾È=žŸÉc³žÒ(Ë…e—_ÿ@—ðmÄšäÙ³{õR”ùéÃΧxLõʪµ|ï¸45k—|âô7½Ÿ£KƲ®ë6²ÀøðìñÀR+޵¨x‘è(Š(;¾™O°ÊfÄÌ€ô»Ú‚oº#ïáÄE¦"íé&d~ËMt 5ABux-C­£¦lÀÇ\ðu¡mDʆ ÙæÞ†-*–èSÉúï¿´OãídI!Œ}bØùJÿ[¾Ø+{ý)/ük#:ow²¦šS‰K4J¾I]ÑY¯ ~ë­±o-ƒ뚎×g‘Äaµïï¯ré²®F5d£…΀Ù<4æoXÔ IëÒ¾Ø@ø[B¹°y`Ñ‹b{]§n2¹-ŽÀX,hˆ¡ºƒ˜ÉdWr.õzhÀSÇ9?FÅ2»öÕ ÚŸï(•ØŸïµ\»ÿ “¡1{Àê¨ù»Âý>4$½×W•Séœ,ëmÙ– û´±š±š#š\Æ‚Ê‚Ž‚Õ·ºî—Ǿص˪¥L²Ï1eU{·,»ß«ešëg™²*¶Më9NìíLSÛ¯¸šåët3 ûe¦H_ðG²±Å«ô{2Kl®×¿h0Qʸ´£.ÿàygjù’â&æm£K¥×ÉñW۩–œ6¼ìKc;]Ì‘~J^¶3úJ ³‚o+sêÑ]«4¢oòޱ¦ðDͤ°Â@Ò ¥z výCAð5EÚdÙÒY»|Œ_í«näk:hªp&iÆ]¿/Ãæ=žH)ÅÁq½{ëú°’5VšxNæà\Àp°–:`Øèc†àˆÙý掭~ÆÜTwƒ°B=2’ç9›?l[[#•p¡Fw÷úÞ-õYLÔoÙÊŽâ3ã܇Íi’`Œl*n¥ QÉ÷ï-o,˜ðǦeΈ÷Eó¡hîMúP®©o>riLC~EûÙ6¤¤G:*—\SDÖxÄzj­gÍfÐJ3ÁÐÁfSÓñ,F5âE-ÓxÏVfáˆz—.¾þƒ°ÁM–b•2J¼ŒQ³û> ßÞÚ lxÈGÆâ­ÅðxæYY0Ù8R@'úBÜ~\Žòšb‰ ·‚Z½™‹I¬ËhhjK{¥{yÌgrc×ÑV¢6†„AÍ.¾Pd¿V§Ñœg/²ÕÖÐþ]g2óS‰PåwRV(ëö%¶÷ m×;M‰ƒµƒu)⿊QÚ-Ô~„.\KËlÒkA›Îaô0©úA\ uµV–ÓÆg‡w'È%ZñN¾äòD1ßn`ÏøRóƒåAHÖçí5ã/ã)‹­#x;{qŒCŠ•«žJŠ|E²)âÖ«¥P³í—Ò~:Þ˜Ïù9½¥¾1`(†yÔ&`òb¿óh²½‚)']û›–Ú óûäGeñ¯“ÅÆŒ÷ Mæø]8$ßÍÏßz““$«é›3R_@Á÷2j‚lâyú–Ìh²j†/QÄ÷p¶ëÂÏ#XNôp]ÄÂx.Uý5û°Mk®u;çjÁû¤Ê‡æìï¾ÌL©+ÇRþP{%Ìú9’áÓ—#"Ç-#ÇÇ%“­©Æ¤vû9Øf$ÌÚbb3ý±Död‚mÒ* íª§wÓ³ÒK¼¿™iL{0¶Ò_~½ˆraÀèòf;ðÑSŠÉ drúkeþë|.?Í]ж²üÎxHßСˆDQ«¯ˆJ+s«¸-¤—ß]L¯ïgòa%‡¼49È'’\üsÆ¾ŠˆÆÝÉœç^Â+ߪX1MÖ6±ZoÝ~uÀ™‰ëHáeÎÆPW•пpc”0ŽÐçÅhFÝC3’¥ºù7GArŽ÷ˆ× Ó¦ÒZÛ6úÔ”] ìÎ]ºƒ{ïjóÞÛY¨Ê¹ºÍ¹‹a´ è÷]BCïÈ”xÿ¹oº,('4O~ðª.MóÉí62†±'ñu@ðA£KŒÓ»‡—¯Oª}l~5¢~+¼Mô¼)AsY/¡I>q­­.Û ,óÍÝ8ËH“È'áìžf ,ù3äÍbÉŽûï÷gŒƒ®“Puô1¾Yòí’tiª‹ÏiÖ–ß\qœV S&æØÛˆ‚ì/š£WãBM*gS£¦'µ%¿|غ%Ø*Ìå[æ;êWiØø ¹ j^84+ÖÃÙ}Ø8UyTÛ×.,sûqƒ¡ê 
«½]nUO`·çÖçØJµ{Á¯Y©Ügù>™„>üâ‰Î—Zú…sðÙ¼ùuŠ%jó¨‰%›9wÚòGºE)áôЦNŸ<5Šfô†Rû »9 ­[Ø•Ý(¼n$lB¿ý˜­©Z†fßÛ_Ô¾Û”Ix¦'T¾ì+(»^>‹b³G¸üø ™úµÙÒTe%/b¶»¹×ªªªøÉ`,Eò†ðøS5ŸèÝ­4ÓEÃÌ–Ç“jeRZ¹ô;Ý/<#|êÈ ÍØ¾¿“Euù@žº0$08Cå[é#ƒOKöÕpÛüUÏ,FZÁ'Z_±…‚%ÅÑ@7ÎOœ+Ó)–µŒ„(´Ì_¥9›æcr&(ÚÚíö’+¡Hº ´ëÄ:bî­PÔÞY.¹nËØâ41'(ëé°CJçfݼùñႱĥgÞYø©\š¿ôÃ/U+†"CÑãqôΟt ¦<>¼øxÇ2T›wµ-'ÕÓŒG˜5šU]£d(«S{fŽêüi/àð“KZŠ€“"úŠŠ†ùÌ-íXØ&WÍ‹ªoƒzè:8 ST£ú UæËbFß[Egh¾±|óüÍ´Mo¾>Qb«þÔ‘8M2—Âôm}IǦkZzÞÌdŸB>é4%Õõë‹× °wo³XTG\ T™çÙ¨×Ý][µ½QIxc%B ËÚ‡èÌÁ5½…˜.Un­û31ÌӸɮKöξã^e/Ó7™~„¾„o3낸Ù;3þä^kº¾:åµ7€š›*¨H†)àD¨ïº\ñ£ñ ~¹Ù¡¡(ªÙÕmÑå.«ú×\R¢è(\{ýµåõŠ[–u]§òoÚáÄU§è|`^si:öSÁQ|¥ïX9$½FGI2¿Æ²­Ñ›8KaL¦K^†á…Öç§éo/Œžtûûý/k..Ë®Õ1ì™IqbÆ÷Ì‹¶Ë꽸,ñúy®­D¾ýÞ¸btï~ìB¼\wßuÚ¼hz¢wšc×ÂÇú8U#XQúÎÕXÊ«†}_í“A!¯7õ)›RèÕ${¢Ù±Si~XI©éÃyȾl?ýv›ƒŒŒëçkÏáél‚Ò«¦ÒÎýn}Ï–ÂË>*Q=²Éåeh›¿qb씄ªÌæ­“öš¦CZ”êü¦%a¶ïºæE“m«§¤ªÇ/-’æþÖ~“cLùþ½beGýûÍ¥r>D®{€9˜¬y(¥÷ƒ¡¼œ³*¼×ß›iwomì!Y¥/iÌ~0æ!«iE‹dÅ[ÇmÕɦRÉ£eÁÝóªmvÓýa!l]Ézn5okˆMB]äÐLo|ÑF>¦jáÃ|Öôh~7TÅ–âÁd¾Ê£á”±©_ÔŒêèJ™M†«FC†“ä.3âqÓ—à]ŽÊ–‡¯³í{+v«Éöze”A¥é’lã«••ÎkR}s„öžÎÀ'q¥‰e©_Ó‘® Ï÷_J{’z×ÎןôÛ€ãâÚf¾c¬g¨^Áë‰Yòä»,k`KdT‚€m­±3öúã¤VØ8UÑý[+Ú‘îO•‚ê†gu[–Öë­:XVýÍ&'Û×Èl¥ç-(Ù>Ç| ™í{žUPZöܽ| N|oÁSÜ3Sx±¦§wA(é«¢~ÉÈ‹ý°EÓ}®7ªHVÄoˆ£($<ü#¼çɱêŠ9u4¹1Ëhh¢9µoË&k>‘O¾ÏálÆ]ަó„4U‰¶úóβy×ÃÙmÇ×b/¨c'XMW!KD1¾îÛº°¢¹›Éí `öÜg €Ñx߯¥òI–]ÿE¦kõ†ÆÀô£ÉÜ»Åa·à¡èÈò'‰±j¥ºªØ^"òâ~µÈƒ]!Ÿèkܤºfô#Þ?kÅ[U·µæË{M+×X_[$â,sUø]+m8bîý˜Ò îíöZ’g¤T®ªLR‘‹ÏÞR²¦©˜´2hºt¹¢W U¤s¿÷RÈÕýŸÞ"ã,ŠŒÓ=«¬ôc<* ¼Q±ÃE©Î¯¿¬ÛOU òfó\åê²Pä¸òŒŽ×#ÍÙõKg%ïø –9 áEo¹öÞ™ù åR§¤ØõYÏ òœŠ"‹2u$7pPR^½IvÓ RïöEññ×ÔøšÏâÃTÝ:¥Š_ùø_Àc}Þ,½Œ¬1]ò][åõÃ4‡¯ ì©%§3jÜ(ýFÐÅàö¹¤øŠÄØ6Íkßµ—E$ÖnoEnrÒSZÚ *ÕöÈLŠ ý?,NÇöÎ ÚLJÉß!G+À¿¿†öJ •±]ú*EéçkS( ¤v9=Ï‚´ê«ÄÄÚùÙŠÂÅ;¾Ì&KE ßY1È&$—ˆÓñÆ|-ùº’âg _õŠX¢$Ùì¸ö¸¢ù›mòö6ì©–œú뾇æ²ôÎQ‚ššFå˜ZÔ…Â#xPa‰Mùe“EKšY¨>täqu2ækÕømÍWO›ïHL­VQgÜ¡[0Ré¶?kÊyex9(BÙKõ:å¬å‚>©(—¢½k¡Ö¦…åì´â‡Ðƒ»GdÎÔD{½ ‰8¤7X¶^bŠ¿ àØm%«âGxÒÔ¹šîó_ ¯+qYጺ?®¦Õ£L¶ü®€Þ*{È8ƒºlDI1cR?´Oærë³Ø÷ˆ‹úöÐ0VCÑ6´ž%F½[ýËF²krþ/ꑌ .UÜø.ðÙþêÜRü„×u=ÎÉÔÄÍ7ÄÚ´ýGv2â[ÓÅÜe^­Ó¤—â|³½ìH1z·üÒâ’ëá'u¢§fœ"ä¦f?W ‘ƒÌÞ èßþŠSú¾EñN…=aïQ¥ÌQÁ˜E˜PÞ j­ZdjT,˜|ÔòDwÇëGkÏs£¬M áØ §éŒl1•ÖRÚ¡º²è-¨ZîP VØ­8bî8ifš«¡í$WÌõ‰ÑUg9Ä‚oQ)µ¨ùù+’ÀLßJ+×YÖthd¡YêQ¦3ù 
º#OVAÞDùþ\F^€qŒò™ü›'"¥hbzýïªÙ%daœŠ÷Üo¦+(½«jæ]äºËšÑÜëÓðeíÖÝÚGƒÓ™¨œShšŒÑïNÝÿhð vÎð™<Òòƒ»÷ÚÑ1îö–þøIC—!Ëën\å6ß®µ‡­y¤ñ4áÀÆh2ÚDcÄW²ÄÔtu-xîÁÃÝ©€Ö*WdªÞ’89qÉÐfy¤ø[¨4MdQ{ü(o‚—epíÆÞ2ãò‘k-÷= &–$u¥ø¦&™ƒ†ä\.Hƒ~`×(ƒ»=“RÝ–f[„ht¥Aßß8”uY9R™Î´Ü[$[1úŽi²$zåð›*bŸ’;Ä„¿MnT'Jy6àüŽ©†¾Pˆ3x°£œ¦)IdVµ­ ¹Uî¼{%*}2ÍS&\xe–½ð£ÓÒØ@«$žQ‹¤3¯ûË©ÙL}¼ÓA‡Ùh5Ž’À¾ë?Þ¢gVs‚¢ÑmÚ7ñà¬.‚hp´çe ª›†¼ô)•ÁÕLð¢P¶íhJÁÂ%kAèZ‹jÏ`‡7Q(ŸâÕo ¸ó¢0séš=H‘L•vÉ‘c?%%î‘P½O•©½Nâ ;Œxì”o¤qC&<”1íãý§´ÎÕftÇGy÷Ùb¿$A="áû]>øRdÉÌ„±‚éîU–ÒîiJÆ®`=þ˜K¶HÁž 7ðDe/;1ð„d–Á$í5AAí|é îµV¼ý„4'Ä…(–£ü,dÒ]BB‘ji}¸Âäiâa¼Þ‹,I©ˆôó^÷õ›m(I³JkѬ ¾=øŠÕ³øQÖ÷}"1+™E=¹ª™Š$Òg_¥áí~ÛnQ=”&™0¶|ÐdØî¶š†«M=ï×ío“8ïóv³2­=å~¼,îTWó”>9lá¬ìsô}é›5aÒ4¦ eÓ5}Í>›7ÛbÇÝãåDU=Dž7¼…~ÝB™è€ðZ}|öºǸ·Q§ÀÂhè#M9Ô§ÑXÏ}Þ>3Õ]:ç²N'ë–„,¶Nà'¬N£àÛ/É4õO£D I«TÃì0æ|)mài˜ÐÞƒ0™)Å~yì‘ôÍîÏSB@¹<âÙ2%õýîÌ:Ü mÖyVÕ¤o¾ò7c¬Ã¾ù¡ÖâŽ|z>Ó7•ƒâjäÏjtà* Syâwíl}l×b­þ¢àkÛá1õ\a”;ùe¡±æ‹¯µÄô~Ò¥J)’hÞ©ŒôÞ|áû¬Ú¿v×DFÌ7å€o£Á`IŒOb.W õiƒ»†VnB•BDàÌl–B}Ì =Ðü"ùY6•ëÁ®7…=âP‚iœ ŽgF6pZIŸ èóMÀó¦e˜ï³ŸêÆwTès* §$±jh~ÐJ:~ª©¦ÁTê«0”ÓN¾9b%üAúÑ˺5(®ò>Ø×°U~Ñ 6ÅŠ³Æ¯9rs°Þ Ž‚žÕ;Iöá Å|­,݆¶3"¿â¥tâS"ץܻ¥x‡3>ÂÏMu™ó#g›ÛK~0öÞ´îüœ:Å[¹ +ˆL‡YFØ(Þ¤WìSUDn3pQÄóä£ëuá½6ç°ÍüqTJ¶ÎìåUpSW›XoWP}+ãû»K IËŽT)÷âÑFÂòV#¤r±FdXu_R„_¯´T˜%ÕÒ²zæÖ}°"þS7Ÿ£PR㎱œ>CÒQä]‚é¦t!âkh²¥[>¾ýl©ƒ«&~ çp{“rd­uÂ>¥Éåªòi²,õ(Z³|*íMÿdõ5ìíÛ Izÿðy]þÓGÃæ Àˆ%­Ý$¥áÖæ÷N8åB¶÷Säõø'J·'Àïîù‹~GKÅÓÌSM°H}¶¸SÊ\+k LÑÀß|²² lMЭq³ ÇØEZî]×Òëznü¨'ÎÀ`Ÿù~ëÑqÁGÌv•‹ÃŸuºZ¨d9cœÝ§siÞda[ Î˱E=)åZ®WXlÊÏ·\·xhRzM™YÍ;þñ~1ñÐïèƒæPôŒçÛ¹º•Pã6Ö©„M&ïÎC¥Úì¨^YÕ,ƒ.ÜÍÒ÷*!~æÍÏsÐòîú6ÖnšZštm%²R&;èåtÙ/ªíî1]Ž_#X'þYòÆ>ƒ³iÞ¯?Ü™Ä.Ï8z¸%¼7/sPùÍ1 g˘«6¼”ÚËUYî' íY¾©9òÓy(ñÃÔÏ,§¼ÒNè~÷›E”AÐ’¯ä7iºÛPI¸^{ß? 
ãij€b©!+ J ßK2(Ðæê’ðˆŒGÃág‘mú˜Oúy†ÿùQ\ÿ§£rm¬Æ=”Ú…±DëÅ„)Zÿýâ²t òVcCãyҒĬ¼ò¬¼„‡C=4E šBh >2ú®êé®V>ãÌO‹íD³ ]µüþÌŸ6ìƒ"…Ú-ÕþŽaøaÍ:û:8ÄÖƒ@¶YO©›e`šE–´3Q_Œ/F%¾zhßú¤8'/+#«ä¡fY×{™,ž¯«üÞ°Dòe°Œ<šmúÑÒöÍLÎ6ÔEî©ôÃDï“]‚rg}Ú·7SßÖ¢¢,xô<€æ¿§í¼ÒlQçð \”µ`¤gªÉ¡"—³a]÷P’{2Ää®H4F¶âdêë(|Ÿ@µ9‚g‚©¿Aà ¥~Ÿçý;仯Yà ÝV&1ó“Ù(ñ¤X4Ct÷OÕ´Jy?úžñãÒÈY{L†ÆvßÌbzQ’“’—p+ôr„™&NfRÂ3OÖV0Àó‡tL]¸ss®c3¯ðãbkïA3mqF;ñŸ× Óû g DR±ºÃ†™„½z ¶¦é _õø^Z·Ên ÁKìqc'pŽIðXê–¼…ÿÜeZÅŠé[eŸeëûg—rKª—3ÕýÍ ŸJÊA” ä¶>=Kð Žï¾nQ”{/Ƹ—·è&ÓQ-1.UíçT³šo¬Š£ƒWX”ø2^’¯~[ŽÄ2üü¥‚WyÒ_ÎÓÅ&kLHo¤m„ø s!«‡º?nIM°¼…=޵^7;^Ornž÷f{™w@‘Òð#•9;_—Rò-«—€ì3ZÏñ©~L±Ê3¯_{3·”jqEì'úåÑR^qæ±þ*ÃsÇvÍh­ˆæLšÞÞºÙU­²˜ßÓ~ŸÊÆ0½(£<(E 7 é¥oÅx$e‹ µ|Ó-)UÈ#iWU/CÍwgÁc‰Dóh¤x{ÈÉýſٯ÷åiÚ½øU¢5ü.é—H“ÜQÚXÕ;YÕ›ÿÿ²&_n1uÌj{àWJ\`Y®‡ÊQzy*Alú’Ô2R`ç KÕ¤à7òk¤æ‚[éÄ^ŽŠ¯¡ã?FÖ.a`!!¾ô“Øê9Ú@ìæþ'¹¨œæw^ð0_šª^J|³ú §U\#ø‘úFI7!†[°U Ÿ'že¼#‰û)GJ ¡‡²âÎi©GÄbI5“}ŸDô€^}‡DŽØ6gW{¨=É"ÊkÝƪ.òÀ­z¬dû0B!mbOGìdÏûônëõ.G½^üZzX·Ã÷\‘„pTèv~ðÈÆ¢‘§¦»<”@6Ê~µzg•ÔþòW~œŸ<‘è³7´p)xw9’‰yŠ@vÁÉ’A¤,D®»·íß~c±÷ ¼ñõ±Ú©ø&¦žÞj×%ï«·ãÈ$(ý1#ݨ)~7SPFƹ2Ù[a’“|³ž„•݉sz1Ã&àËÁ,õøa?p9:Ú.|ððŽdN`â$Õóâýàýè}ÎèhÒéú¯«ð…6ŸÓë«b#ù\¢õ¸ë09Ð8Ö÷\®(®t¼Ú#Ð+Ø&d·L'YDç[.ô鎅Iá•Ülöôä!™ë¿Ô*dg"ÄQöÁsÅÎÄmúlÇMILŽòx®ËIDLð] ,Iè¡:bW£§›´õ™8w9‰˜ ¾‹…u! 
qª#v5ñkY¨>ÒH¸`Ä= …Éàb‡+tS½ë’ÐÍ©Kü—Ñ‘™îx$\ò¸‚‹¦N…~ú9wDZC'#aZ½ŒØá q©O" qÕø_æGAÇcÒò¸äŽË¬^~úvý†ºü¥šj&:—“÷¸î+i¨ÄÞæW˜$.Õ^ÿÌVuz¼ŒK‰{z|ôóò’ü·K?pã ‘‚5ÑO.> 'ÇÇ*2”Ë9 ˜­(v¬Õüæž—mqÍà»ð1 ÐÊÜKÏñM$!nõuø®fÕÇé3bçÑŽ‰{àÒÞÊ´{) ±óƒ­_"QGìjf4_ "a~ÝŽ»ñã(<©S&s£6Äè ݰ —欃Ut $‰®üÖ"löæÌ½PU?o…7OÐrò ( Vx4rJ†iÔ¹8Çô]u!{cöÄļªÖ.sH?I&_Ñ1¯(VbäbDÛûŠM 4œ2´1Ü4Œí½¶¦d`ð#Jæ-ˆØÿ©ì7àèéÀݯùÆncƒ4‚XïPi(à=Dm~£¾) ÉèÆLâ¡,ͼ-u¿@2#8¹[U^î]G›€'‹~‹¾㵬(™¹üÙë‹æs´Ï§e†;Uûâ´ŸŽL-Úîý\*ÆÜúþðQiXˆ¯íì<Íe¼¡¬¹¹ù;/"+Sg&îÐZwPàß’c·ë‹ëÊ‘ §ìÇ ÿ¸:dv¬gD©I2Vór–'“^„Ësuâ>”¾Ñû®Éã?FÕ¬»R¡¹yŸéЧ~é.Å#¸ç[YÓÈŠ{·ûΙ;<Øk/?_"н|etëû;Sº\e‚aR.”vJ³)âË\›ý nÎc”qœ¨}ëýTŸ­GÙ˜¦ *ó”}T)è^ ê°!U˜è’\^ônE¹üxôéCd>vÔwü6—7„ÞmñQ4ëN\:à*u1rǵ×§y3E„ÄoôŽÏUµ7;¡Ç|ÓûEófüv\[Ç¡”ëµÏbDr/ßä:ì>L‡Ç½!%YÂTE‚Žc&|øø3žá½iŽi?Þ&ÕwUw›×®Ç Ë!ÍX{kê*@ËE˜•Ô;>a8Žhšë{<þDØ!ÿkj|5æw¹®ÒdÔD/œþ´& ”Òøžç†#?½åäâÛØJp~êࢹoð >FVôPL5hz²ŒƒÁТr?{ˆÒ–VEÔ³#;Z*Ñ;1Ÿæ–à“×¾?£¶´mi³;ÅÉÂ…*ø«eÐFÛCÒIÝâ r(ž?4>GÉÁyv¸ås¯m&Äp?<4­¬ñÁ¦Qld‘g$n ¯sjµ+zBz· ™1¬âU„õ•O~Ù¨åÅx¹Â ®Ý#ø 단Ùb㜶¹+F½ZÓîoÜ48’qæúÈìëú9k¶²í×q…ù©ëT¶ìL?³g§þêíÙõö;{A½ñ¬–9þAÀ 2Ž^é+”¤"½/<—")i:¾; ãG'ù‡¹\bÄ'©Ò|…“P¿fÓ¥QõôYUûõþ[ÑD1„cŸ_¿ÌEfŒ¸C'ž‚‘›žô.æšHZë¨óê ò€FCæõJäN9/ÞÃdŸëY÷v=jø$¶>_N{~‰mêˆ#tõA†\h‘ùÖ¬}VñUÖÖÊ2KPËOú÷ï‹”õÄb¢ÕÊ̱Ía^rù)WÔzÐÞCʆÉË|Ç`YßÇ3'Œ•Äûð n EƘYu(B»,Dʰ¾ ¦‡4ƒ¯O3E fü¢5Ôv—]¹÷vºîé#bÛµ;ò!o…aè­[†%ëEÛ VK¶ÜÚÄ-†¬v®Ó·jªãŠNw@Ñ«Žð<Ì߬QUpÄμ‚$`˜ÙìÔ€µ®šù¡DQêdCÙdŠ Úö£C–±ÆØ¨Gh>“82Þpm•~@ æüÖüV4åIu²F5]ÿÔ;¼¨¸"2-¼ä© Û-„ÛŠŠn­††´({Œ=>2=¼LºO”Z¼«3ÅB{”zµ¯§¡ý˜›­tzùp[WÉ/hi—_£‚n8ksžÍ™Zγ–Ïm‡:Ñ™øŠö±Vùù·O¾ä5dÂÜbG›%õ$ˆ„{åöC”²G]³—µ¼l»‚yoŒrí¯EÜ›á«&ƒòsaxìèTï™7ŒÖt‘ %J$‘¼*•(j{hm5Ý®;[c tvž¢ó+ñܳ6+矷=ò1ì¥ê®Yc".¢;º…‚#ä,‹YÉêÉÆ54µãWÊV™Nj¨.~:4Ui‹&Ö3áôºqÂÂÙ‡úè ×RÞS¡ÚHš–âù¡ «Ã…\ké|ƒòÁ‚ˆµgC›¯ÁÝqßkêI÷ô_ƒH~\C1r7¼™]OQü Ò…`JöûÝ’KÜ×£ä–AÓ[î%´ËÀÛ•wÜÕnWbg¬d§}|„yXx¹³²VÅjµH}W®ºAÍbVÍôóú[õÝ^pøá=.ö ÕÈCß…ˆ}iZqgZŒ݈I¿H‚Ý0fÉ#üÃÝ$ÀõfÙ •_<o¶0¯™±®3RxÀLjݶ­Òåw~ó0r’Ýù©4ìðÞý, »ú» 6»jô™»J×r^ü¨z4š¨)üó°â}dÅ(e.ñ Ý_95ø¿{æ_;ñ_?ø’í_BÀL6³ ÌÌÇÇ_/V Óñ‹ùãó¯÷§ýÑG·#Îýó«/â‚Ϙû×{ÄÄ9Äx6øØ_s÷aeý£/ä”Ö¯#â¢ý,}Ä|„yùzæq ýùA˜¬„ ùwÂYYΆÚâS1Clé;‡S"¾›·®s±Â®<t¾Ë'µº¯JQ‰ô΂»}=Gu“þÕÎâöœOú X·”ïØªÓ¢@/ߺ…£ Ãü@p›ƒiËiÔ¶•skÈriÛ¤†çŠ’ü\¨¬äõ+z?/åÖˉ]výØÊ›Þl1×F2×ЛáQ¸’s_©ÊÓÆ=­ü¹ù*xuöýÓlMzÞ 
A.öO¡|o¥[xsò›n]9|åj`j>Xi¶Ôº-cËÁíN)Ì1€Þã*€•êùÔM´þHé:é€K—ì·Ò gp„ˆj=9Èøzyèk~ª£s˜36î¾.˜Ä¨}éã0;ÞUàîã°Ù$£À@â4ºdUè.s´ÕÕ-¼ÕcR4¸×³èñì^K¥Ãö`EÝb…Râ­;êOSò­·”· H›;ë¾{ã_UçÃ+nkÇge"Û'±\Á¼…-~Ëù¹“æGÝ+?QÍ;\神³W“f0aüvC®5~é%#œ ø„sßEãDÞ¹ÚOº$,¯.ÁHF8¹ÎZ}{£ÞõÖÛ1BÓŽ:´ 5Ðn&{Xcs†»GC¯UïJd¬©|ì'­ùþO0œHÙ œ¯hM×P“€`͆ÁÛ¬©¼!©VM«aášíá”4ñ—£Ö˜ ”-Ç>óÅa"ÝÓ½+¦ÜðãÙ|—Ñ’žEŠ€$v† :ßĈ4qR>éݰC­ª~{¡VPœtŒ‘ï¡CÓ€ÏGnÍȵ¿ ú_OsCÀ¿ÿÓg9 œ@€¤™¶ÉS{c S£¿Ìb@žÅ€Ìð<ÄÆ €Yÿ.‰A Ð³IŒ‰éOOo²þM;ÿôfÈŧ7333ÿ»æÏkèÆ„åþsç®å'tä–·viJ!7àAÒÍ.܉*çQ ¡ÒÌý—a[“âVƒœÖ“AN?ÔŸ?fr°ð“ï'Eg³¢Ü2h•^6Ø/â¾XD16ÔŽ;§8 Y ýBM¬‚»œ¹îK”›v]‚Ęñ“šæ~55oB$ á[‹Ö’-·ÊQlþ¡we—½×“þÂîlÿåCÿÊÒ õÀ?ÌGT(Ë_¡å…ÇC.<6˜íÌcƒ…€0Ó RÂÍ >y°¶±XOÛÀp˜>yëÓ³ð”Àtaræ3“ÃySƒs$/©a ­io1f°œ:£”…©æSm+%F)!Fm;+øi˜®6ÿÉAääÀ§r–ðœ~¥à=à; „k³œ0ùDÛÒÔÚBSÛÀvò@dqm-}"d ƒ¡` ³ £°…©µ|š§p³ÀL,ÍshÚ3ò?…‡Š¾¦öa>F€Üd\\Œüpià$,Ìç‚â|Hÿïü𚘘ZY*¡1«À >9°ž '¶“ñY ¦“ðäÀ|rNfÌ:™t2 èÄO@'³°œÌÂr2 ËÉ,pçúßô=§àÿsûCA'²€þoœÏ›Èòÿ8«,,ÿGÁçy„ü?ÆãÿÈÛ9Ž˜Ï#ÞÿÇ#òüZ‹íÌr‘ßÔÎz‚ªp2Çä¨ ÏšÌð‹åüHæ_Á¿²±žÂ¬¾•‘6‡ <©2€'GÄ‹ÊÄÄ*ÈÄa¿à ¬ð>¬ˆ÷pôf´³BOÛ¹ŽÓ³•Òq°Ï&è4¡ 3ÁþÁˆü ú|ÇBœE΋(ÈrÙÀ'Ït¡òÚ¿Ð;øo¬§ìž‚>äÐÿWâIg£ë²÷ÿ³þV|N¸i!`ðŸ öħ:`9¯ƒ ú?ÐÁYd`:“A¿ác™ÿ\çù„CòŸ²×)Ÿ'v;“˜¡çÓô?IÌçy> ¬¿õÉü Ø~gµ_‰ù} ~þ{³°ü9iüI– :¿‰¿ r& ÿ&XÏåè_@ÆzÊ5äŒÿq0ØXXÿ.˜. Â…üü/ Áú•BÑÐÓ è—ñŸ²yj°?> ž`1tÆhB¿ ö߉:îW_þ A9Tç@æ"˜\Èþzƨп1ê÷ƒOk3Y‡þøK#Ìç²Ó/í@N5óçòæ·$'å |ù»òæLí'p^Âÿ¤xýL žÑºÀ)Äý+Øüï!òg'f;ñŸŒu¾T;îßÉÂòŸÛä¯ç áß¶ù;{€XÀ¸3ö€²µÇù²á¤åß q&Ý‚Ï¬Ž•Î|ÖõO1Pèô<ËÂ<ÂüWë©cæOè÷.0]('þ‚{Ös`>-Þ˜~«òG¥ÃÌtnµw¾0øwçÕÉrþk%ĺÛíÂZí÷bpfµ¸&!¦¯e©8¹îsra餸œ ,xzщé×EŒ¹YÏ®þÏ5{|åã žõM ùL-´´-Ž—úpážhkZ)±!ld@|Õd`ÙÀ*Œ¼L¡i¥ojÂø”ñÙÄ‹JÏÊÊŒ‘Q fÁ`¤obmÇ c¡­ ß1j™j2ZŒ͌´-õM´´íô¬Œ¨áüeñüeÿŒEÄwS D–DìáÙŸÊÀªò¯ w–èyëüçD„~e…—*ÿÚ—ÏeýŸ‰"ý" 7 Ûßýýg‰ž¿"õŸ…0Á‹¾cRp²p¢lÌ Ð‹Dÿg‰ž¿ìúŸKŠ ôKR(ËßÈù+ëœ% ýß<Á;œÐ)IV&0ð"Ñ?'í3TA篮ýÚ=¦u¬]V +³Ê¿® Î=Íö¿õ˜Ô/a™!  
‹dÿ\‚ž¥Ëü ‹ õ‹.|Ïòwt•ñgéþ vLAë]xè‚ÿŽî¯eÎYºÿ2AëĸðØeýãþ•GýoÈtBAê„(½UþõŠý,Ñÿ ™Nˆ'Šc¢ðØeû¢¿®fœ%ú@¦cR'D¡ ßÈô7WƒÎýßé„(‚Ô1QøoHþº~v–äÿ†Lˆ/˜~¥7KôÃ$„r„~©­,£Þ_ùÏýß áŽp`e!ü…‹Â…ü6ë¿~–ÿ @ ã’…W1 <ÔÁÖß8øï3+鼸ü0+˜‘©îII{ú%#âKò“ÚPé¤L=žÕÄÚÈèdǯPeõµmµ-à%³Ž6âË#mKø”ú–fF0{SÍãzýø $DÈ(im¯'•òémFa&ºTÚ&ôÂ|ÔúΙé¬^ÎÔü,'XÎ-qÕýÌ‚¿W*Çë¦?_Ýùc™þ}u‡òûšõÉÒÎ)¿…6ÌÊÔâ”,ò{µöëÛóÉ÷éZÖšÚ¿úýºvüëÛÄçãUì¯ë§¤™Ϭ!çWÆ¿ÈÃõ.³Ò¦`G܉ȃ´L`J%õ…ÙÁõÆO|`4¦?6xHƒáÉGpÜÁ‡ÂáøŒ à~@è…6+èbÛ…63ä/ÚW³ÎµA—϶ÁyaežïÇÂd:߯2Ø.Œ…²\à¡Ùóm(r~,+ù/úAÎÏd‚Æ2³— Þt^@ó_Ðe§ ïw‘.\õàscY˜ÀÌ,ÚXÙÎÓ`abcºØÊ|^xøŸ×) ¾@È 9o7 ˆíâX0Óű¬ÌÛ ,ÛØX/¶A é?·13A¡Ú˜¬Ú@çíoƒW¾ÚÀç}ÞÆÊvA^¸‰.è™ ¼ gx5tQ÷ ó~°/øÏpëÁçý Þ9«`fÐ_´A/ľf<ïW¬6Ö¿hƒ2Ÿocc:ïWð6èy‡{ßyabƒûËù˜†·]àŽaçýÞÆÆzž6¸ÍÏë ÷|?xÛ‚ÿ¿à»?os8Ðy¿Y/Œ…·×¼ zÞ¯€Ì èùxƒ«å<ÏðЇ2ÿEÛ…ø€,°&ð½ VÖ cYXÙ.à ä/` ëyAbf^ÀgV0ä|\‚XY/Ä^òŸ×)Ât'ámpG¸Ðv!ÁÛÀ,èB çs¼ ¾ /ð|„·˜/ÈÆ¾€ 6ëÅ6èEÝCçs¼…ù]¸;_Ð3Nø¾0]Äl&æ‹ ^—îXgôgeÓ7Ò¶@T[Oõ´uãSSÄeÙÓBMÄDÇpR–!> ”|Bp£²Â០bò €Ùxy…x|‚ðt"$ÈÇ…öï» .rÂëD~=mMCKkc#+/`áñ °ñ ùÁ,l¬l¬l@V´ã{]aVÇ%\V … ¤Ú¥¶ÿÿÜN®H3hêèþ¿"TXXXŽðíܯÎY/Á¡öðR€x Žû` Ë%ÓÿûXú½Y#"¸dÙÕïßÿÿÑ ðôØš¦&:úºÖÇk'€Ó:¦q{>˜¦¡µƒ¥€‰„F†FÑ;>g ³˜ê¬ôô-ûÃÏÀ7¯‰=¼ÍDÓ±Ò¶Àdx}]S m-†_½ž™˜[›Zik¬`–˜‰ÀÒ †ø&f¡ý«3@ÛNSÛÌ ³ØÂWòKm3˜bmiùÇ4‚ÆfVö€“•ñ™‘œ‡¯agNó{Â=€ÃPÛ1-;€C Þë÷ møê] !\ˆþ§=éN;Œaö m¸h–úǾ o¥m 0EÈz,ý)£p Œô-µdj¢è`ljqÒû·":¿f…k鿘€Á™ @¦©wBnA+˜¾ ‚íSUÃÎç %@ÛDÓÈÔ>§¾É/ÆmçŒ ldª3‚ÕÖÔ×Ñ×—<M­­Ì¬­ŽŠÊ’`«§W·&be·-àx ¹´ž°ÌJa+ø`JK½&#=ýIçSÖpj'<±m`ŒðŒgœ›ÞAäÔ™³¤e ×¢ <™kÛ!Ì ouø0!¸Y´íŽ£è/Äü;rð®p3Ÿš0Ä´ÖóX™tµM´ö?ÖBFx¬:ݯ.ðùMmàñei ×Ü®V÷Ђ!þ A  qÊܯè'š¶ü5Âb¶úVVpïÏ# +€™Â½ÆV®E„:OF3œÕ ,í-ŽwÂͱ™¬Ó$%è¦ðq¶úp"’BBµ‚èÈŽ8…AÁœy­ã{¥ !ÞœàÜÛw9ÀÞÑRS‹‰ ~¢ „,ôµà’œFª¦©±1 !‘ ”Ùÿ˜xÌÙ±ÕÎ8€•…¶ö©£"À©£éÀãôÏ}ŠÔ45³?U(Ý/¨Ò6²èX˜ó†píߪÿmNºcõž²I‡˜¦¥uªn¸jZkÃCû˜õá1l} MÚ–VÇ``biea­yð¡JpNÙ>æîØkaffÚp¬M´àâë[ÁÅqÛ̱Hpµ™ÁÆÄ `‚«Se"¦A›µ%€QîfŒðÿp®à!ÙÚH !€þIß?NüVÌ™YàÌHšüá@ÇŽwxFm+MF#ãßcèNœëœrÀhd ³·<ç·aà¿2 ;€ò„}MF-F1xÓÂÍc0¿%#¿%ÂòâÚ––ðÈA@Àøä=‚ˆÖÉåX¸œ:][X›ã+ÌÄþWˆ˜jXþÒ5Ü– „¥ž©í‰~͇ðK˜¥áIøšÁíg 
dar-2.7.17/doc/samples/Makefile.am0000644000175000017520000000250614403564520013554 00000000000000
NO_EXE_SAMPLES = darrc_sample sample1.txt README automatic_backup.txt JH-readme.txt JH_dar_archiver.options JH_darrc cluster_digital_readme.txt index.html PN_backup-root.options PN_backup-storage.options Patrick_Nagel_Note.txt

EXE_SAMPLES = cdbackup.sh pause_every_n_slice.duc automatic_backup dar_backup dar_rqck.bash JH-dar-make_user_backup.sh cluster_digital_backups.sh dar_par_create.duc dar_par_test.duc MyBackup.sh.tar.gz PN_backup-root.sh PN_backup-storage.sh PN_ftpbackup.sh dar_backups.sh available_space.duc date_past_N_days

dist_noinst_DATA = $(NO_EXE_SAMPLES) $(EXE_SAMPLES) dar_par.dcf etc_darrc

install-data-hook:
	$(INSTALL) -d $(DESTDIR)$(pkgdatadir)/samples
	sed -e "s%SOMEPATH%$(pkgdatadir)/samples%g" '$(srcdir)/dar_par.dcf' > $(DESTDIR)$(pkgdatadir)/samples/dar_par.dcf
	chmod 0644 $(DESTDIR)$(pkgdatadir)/samples/dar_par.dcf
	for f in $(NO_EXE_SAMPLES); do $(INSTALL) -m 0644 '$(srcdir)/'"$${f}" $(DESTDIR)$(pkgdatadir)/samples; done
	for f in $(EXE_SAMPLES); do $(INSTALL) -m 0755 '$(srcdir)/'"$${f}" $(DESTDIR)$(pkgdatadir)/samples; done
	$(INSTALL) -d
$(DESTDIR)$(sysconfdir)
	sed -e "s%SOMEPATH%$(pkgdatadir)/samples%g" '$(srcdir)/etc_darrc' > $(DESTDIR)$(sysconfdir)/darrc

uninstall-local:
	rm -rf $(DESTDIR)$(pkgdatadir)/samples
# $(sysconfdir)/darrc not removed as it may contain system admin specific configuration
dar-2.7.17/doc/samples/automatic_backup.txt0000644000175000017520000000651114041360213015563 00000000000000
Let's describe this automatic tool through the words of its author, Manuel Iglesias (extracted from email exchanges):

------------------------------------------------------------------------------
To make it easier to use I have written a shell script with the following features:

-It assumes all backup files are in an accessible directory. From there the user can copy them to removable media.
-Easy to configure different backups: Make a copy of the script and edit 'BACKUP SETUP.' in the new file. Configuration file 'darrc' is not necessary.
-Mounts, makes a backup and then un-mounts filesystems: A mounted file system could be an external H.D. where the backup files could be written.
-Decides/recommends which backup mode is the most suitable: Create FullBackup, Rewrite FullBackup, Create DiffNN, Rewrite DiffNN, .......
-Fully automatic: Use '-auto' option to use with cron. I have studied my system (I am new to Linux :-(.) and cron only sends mail if files in /etc/cron.{hourly,daily,weekly,monthly} exit with code != 0. I have written some shell scripts to handle cron jobs.
-Different backups can use the same 'Destination' directory: Backups are created with base names made up of the shell script name. Shell script 'LoveLettersBackup' creates:
    -LoveLettersBackupFull.1.dar
    -.......
    -LoveLettersBackupFull.N.dar
    -LoveLettersBackupDiff01.1.dar
    -.......
    -LoveLettersBackupDiff01.N.dar
    -.......
    -LoveLettersBackupDiffNN.N.dar
    -LoveLettersBackupDataBase
-Creates and keeps updated a Data Base file for later use by dar_manager.

The backup mode algorithm is the following:
-If there are no FullBackup files then create FullBackup.
-If there are no DiffBackup01 files then create DiffBackup01.
-If the sum of all DiffBackup files is less than %OfFullBackup (% set in 'BACKUP SETUP.') then rewrite DiffBackup01.
-If the sum of all DiffBackup files is greater than %OfFullBackup (% set in 'BACKUP SETUP.') then rewrite FullBackup.
-If DiffBackupXX is less than sum(DiffBackup(XX+1)..DiffBackupNN) then rewrite DiffBackupXX.
------------------------------------------------------------------------------
In my last e-mail I forgot to mention another condition which the backup mode algorithm takes into account: NrOfDiffBackups. In the copies of the script I sent you:
-If NrOfDiffBackups is greater than MaxNrOfDiffBackups (set in 'BACKUP SETUP.') then rewrite FullBackup.
In the meanwhile I have decided it is better to rewrite DiffBackup01 in that situation and I have modified the script accordingly. The backup mode algorithm is now the following:
-If there are no FullBackup files then create FullBackup.
-If there are no DiffBackup01 files then create DiffBackup01.
-If the sum of all DiffBackup files is less than %OfFullBackup (% set in 'BACKUP SETUP.') then rewrite DiffBackup01.
-If NrOfDiffBackups is greater than MaxNrOfDiffBackups (set in 'BACKUP SETUP.') then rewrite DiffBackup01.
-If the sum of all DiffBackup files is greater than %OfFullBackup (% set in 'BACKUP SETUP.') then rewrite FullBackup.
-If DiffBackupXX is less than sum(DiffBackup(XX+1)..DiffBackupNN) then rewrite DiffBackupXX.
------------------------------------------------------------------------------
dar-2.7.17/doc/samples/PN_backup-root.sh0000644000175000017520000000014514041360213014663 00000000000000
#!/bin/bash
dar -c "/mnt/storage/backup/root_$(date +%Y-%m-%d-%H%M%S)" -B /root/backup-root.options
dar-2.7.17/doc/samples/sample1.txt0000644000175000017520000000447014403564520013625 00000000000000
#Preface
#--------
#
#Here follows a sample batch file submitted by Henrik Ingo (Thanks Henrik ;-) ).
#It is complete for backup but does not use conditional syntax. Besides comments
#(lines starting with #), all the commands here can also be given on the command-line.
#Thus, this is a nice way to discover DAR's features.
#
# Denis Corbin
###########################################################################
#Execution file for dar (Disc Archiver)
#Simply use 'dar -B thisfile' to backup
#This backs up my home machine

#Where to place the backup (somewhere with lots of space)
--create /mnt/win_d/darbackups/my_backup

#General settings
#size of an archive (one slice). 650M fits nicely on CD-R (and RW?)
-s 650M
#compress using bzip
-y
#verbose
-v

#Files not to compress
-Z "*.mp3" -Z "*.avi" -Z "*.mpg" -Z "*.mpeg" -Z "*.divx" -Z "*.rm" -Z "*.wmv"
-Z "*.wma" -Z "*.asf" -Z "*.ra" -Z "*.gif" -Z "*.jpg" -Z "*.jpeg" -Z "*.png"
-Z "*.zip" -Z "*.tgz" -Z "*.gzip" -Z "*.bzip" -Z "*.bzip2" -Z '*.zst' -Z "*.rar" -Z "*.Z"

#Define directories to be backed up
#First give a root
--fs-root /
#Then list directories to back up (relative to fs-root)
#If none are given, everything under root is backed up
#If something is specified, only those are backed up
#just/give/path/like/this
#Exclude directories/files with the --prune option
#--prune not/this
-g etc
-g var/lib --prune var/lib/rpm --prune var/lib/urpmi
var/local
var/www
var/ftp
usr/local
-g root --prune root/RPMS --prune root/tmp --prune root/kino --prune root/Desktop/Trash --prune root/Desktop/Roskakori --prune root/.Trash
-g home/hingo --prune home/hingo/tmp --prune home/hingo/RPMS --prune home/hingo/kino --prune home/hingo/Desktop/Trash --prune home/hingo/.Trash --prune home/hingo/nobackup
#Be sure to add quotes around tricky paths, or why not all paths...
"mnt/win_d/My Documents/" -g mnt/win_d/text/ #End of file #Use something like this to restore everything: # dar -x /mnt/win_d/darbackups/SIMSON_backup -R / #something like this to restore something (etc-subtree): # dar -x /mnt/win_d/darbackups/SIMSON_backup -R / etc #And something like this to retrieve a single file to temp # dar -x /mnt/win_d/darbackups/SIMSON_backup -R /tmp/ etc/httpd/conf/httpd2.conf --flat #Really looking forward to having ark support for dar! dar-2.7.17/doc/dar_key.txt0000644000175000017520000001115714764315776012256 00000000000000-----BEGIN PGP PUBLIC KEY BLOCK----- mQINBGHV4FoBEACajECT8wwG6qeC3WpE8N7khbtBCdVnDkPIER2Ho3I5HFiLfE3n R0iELz+knPWJ9NLiksKVwgE/MP2GJjwXqWG1AeEH99jdnroSB8kc4YlMXoAmP6dY 0N+Ovsvjvy+59M7PiKOscR1RMux5461ZYeGXVTPjXeVj1CG9sA4zfPLuBXVuLl9Z 9ubapJZbKFe3CIuvvEEc4H4ApMbaDjFTi6tWpX++xUPzjPs/QOhuD2Q4iw1XdwNZ q5OKdB1tcQq362RjModHcTkUCU2d9x2q7BhRgf8xAXeXwB+2CPYpagZx3iaUkHdV yhxtaHKdMXefM4Nb0j863zAHYz5R2ClmSKihyFIo21c9k4ZtZLEFMIAEetRioGEF FkrTGJO+TJG2cOYmdNXI+FnOSXcRmHmoGR+HPRRZs82C4QKtuUuhsmstkvocm1j0 Yf+BVM1M+vKI0QVxIfQjQGO85+gNCyo+p2T+3P+unjlomm05KXZn5HZsPwF9zfsW 04VH8xHJdSCnpH7swDcG/cojsHGKjV6/JV+7D9yFjVdl2BHF0L41SV3S/f1Qsm+6 rVUVVMbjTLumjvCmR3jvzJ9XrAKCRTfO4ekTkOh3KVyazQuhhfAG+rvP5uQUxBmM if90NLQWDevjF8C6R3wN00wYQtmPJpYb5NUEINxaB/vjW97z1dCBqb6qfQARAQAB tDxEZW5pcyBDb3JiaW4gKGh0dHA6Ly9kYXIubGludXguZnJlZS5mci8pIDxkYXIu bGludXhAZnJlZS5mcj6JAlQEEwEKAD4WIQQb5HYGp08XjHMoQ7BfZFsZFtVlRgUC YdXgWgIbLwUJE0KqAAULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRBfZFsZFtVl RgpJD/wP2HJMzwOpEhDpsxK3sWO2WE2Muf+iYWI2A3t9jJuyoP0CVz40G24f/O3z 5i8CuheZzVV3koEBweAr0oYtYeLuxZ4cvtQgxZ5JFij3oZiPHYX7uuJ8E9mIORW5 7Y2gLNsjMx1zZnP8UPpBLcV2qI+lc3qLd46giXw28htV4fSR2bWlaf89tsF6Ujoa a2JKDhLUw+RRKB8/I5pYgoxnDTNmOhGIspHCbQOHZnYMGWT5Ttrjd+KXr5KH8gPT Z8ksMzPob8BMF+tzsmvOfl0dmPAlyoFeqyndgz0L7xVWniwWjE3f3+dcNMvhGgnV 8IIzKqG9BKyQCcWtrM8EZR/d+5pSstfDgq4zhi0FQ72jHVhj5lnDnWju/rXc2XK1 d3IDf3OGr7a3Aw0GH8TsnF9TWb31RS1+w/vVOGkaiTIwIF1vJJ5PzSPEwCS03VyZ 
uVt18GCpVa2izLR0S5EzoWts6zWT7kM+hqOm34uCIDrb/nvByBvcwPdn5mDhM/B7 ShQ9zlJodMmnuP2AWfZfaPx6hS4JdeQ+i3f8AU2SzxxF74CXavasQIr0ltETwtM2 77npkGVvKLqmetETzedflpwgdr10qczvgabqzn8JwAasymxFJskpuui50rtNUJ42 6u5eb1lYMG40XJFFTa4Gxwhat7XhQuP1XYUzMniNySsKl5UryIkCMwQQAQoAHRYh BDsxKa8d3e+lo32BjwgxsL0D2LGCBQJh1eDHAAoJEAgxsL0D2LGChrcP/1RPiw1f DRI/KHlnSrekaV19LBI6ljMONiK/NJTsuUuonDsPe+taCFuksMTWFS6pBzfzBYG1 WFHzCK5TIBgGblyBGadOgJyJkOmzOo0H2xZbO1NG1c7XUFpenkAVIg05e1YdV5uX MjBlpKPs8uSjC9AnJTNQ/7efg8hMCVDcgzJxveoZqueY2/zH7jjPAE+JOgke7kFC Q42caTR5OwqT1tdZeWxhzZmlVbxd3vGw4A4S9pqL7JNkLVJYdBT4YXv0WsZlulBG 0twXWKMmXlFPCilsHAIwzAc+wTSEncD1QHLcBgIqC2WANvXUgi9jvxH7LubZvzKu 4ZfCaORHcmWemmT3IcytjPP4osi9KFSQBvCKXBEZwEuDZR2KGdMfBghRobrV1w4m A1wcEuHav4S7tLVcLOGbYrupbz+K2LoRlNMS///6FVitnllYMLBw357LESURxPR7 DLaPLOrsIlkzebKkCQavjoh8Jc+Ev5+Xv6rXHHBB4jYnOt73MsXhq/7EBIgyZtmG PEcCpjFwtgFAB6BQlxuGbFN5uIaJTrmWzi3+EaOMp5Auggp1ljLBhQKCxjn8AGN/ YJlFIwyF65P30g7XRqDBTKyrqqcGDgjhV/qDpzYapnJMMDvWbiPKlBdxP9qcaaxf LyKuMQrkmOv5jfpaaanYiKhXI0s/pMw0sscauQINBGHV4FoBEAC6IYOXXGGR8f6M iPunM2MuX9g8FpUsGYOhyyzu0BTfX9VosUjE1o4oieA/U4/1JdB6ZSQC73m/EOxN 3MHcDpAMmYac+5lYznQASDvCqvIhX44d1xT4jFTu6vm+x+qAoqoQH1vRop67uKCD Y2rdivBbwZlVzjoi4jKk/0aENEoZz48AW3cpjQAbnTDiL8lH6IjlLhGa0Vs6dE79 pGa/LhEHT7C+qQOZ/ydYybXld07ct9KP5eOvreis8xsRBcpcbcOZgL8O802tkVXd I/IWwzPGbUVABU/78LAulfM1qR5cAkRx9ICayAoi1FYZt+twhebRuhQ/KADTiiRe 5MN4GSGfCji7lpi4GJlX/1mrH+SCOwyw4Lr95OMHau8KMsP6Y07iuLjtHiqpJFAs vXYELVtaX+p07aWMCry5sj9Z++vFzMar/31ouz6tHiVPSrp4eDHj2X5v/VmDqYEy 1i/0uLBmW4nJdscm0qPoANEHHjw6ZNx763ZZzyvfupbEhcnJw1d37tte7bgEOMtA o6fLwxHw4Bdv6jXDXrsYfANADFubwGi+jbJqFuhuWs7rhTcHWO78w8RbOPyc4Llc JcW/vGlYZPreASjQnXS7SPnv/hnWS++KbhtIhaNskULXzddPRmx0ndov5zwjS9Yb 5HIQzw8ib0/C+ncVS/WT8FVOPbOSGwARAQABiQRyBBgBCgAmFiEEG+R2BqdPF4xz KEOwX2RbGRbVZUYFAmHV4FoCGy4FCRNCqgACQAkQX2RbGRbVZUbBdCAEGQEKAB0W IQRV5ISmpcW8f1P39y6osUFg02s7pwUCYdXgWgAKCRCosUFg02s7p+M1D/9FCLqC Mu3YsMFG3ca/qUE6As/GkZ9JYCO4skWB7WazzrIMw/eOqhZLVjtDV3en9uc2DVC2 
aeue3ON+qPGLe0KHM5UPWICTDWVFa2heMfQ79YNoOctYcNczt4rHzPijebKXwLTn fLcOJ140lXDQEnuihHCa4tQhoTaJalbPJ6pkAHKA+BTDLahK+AGdHtQcI5XU6mkU qGXHy8pBpeocGJik7tQgazza9I1HrXOo2Pb/8U0tGEKslyAGGLbdR/n0q1e/cYJz XjrIm588GcIgGn4ut9MSrYhtvB9FzSLmRg/xpEy8ZnzX5IwcXU4IoIoJja0q6Ju9 izdC8cR21viWaasM2xOkqPoBDLCQ0TA8ORhkf+f3TGlfTFr5muaVLAcz1ZJggchT XtqbPfaWbAk+3b+2HBDbZmjYiKLP1kx/etN/79exEFyaprOccgw1Ofe4GbPjDj1N /1RQ8NAXzuCaWD22VcwdlAmqIfgqFKeDn6mr7ibD4qONc1+HRJOXL1TuiLSd0IMe kJwuNDEpTmRYvgfWUXAO1a6dA9nPRF6k5MCiUmyzTq/+sSt67kyacwPHj3UwvB4/ EMtZVcaJsQ8DO1bgYUaTqt7XObckjF+TJjL0s8SGV0PmbHpkreFn4IBV+GvQCWGE ORzV0Xk/PicKUFU0dZ1QzKrHOISazkl3wvUE+2I9D/sFAOlwhcgbC+z0AxbZPfpy +aQJ12is5yemhsSM7oWtxvSKd7Ly3RQawPhxkUYyIUQCiy5mVZe22dlQbOMY5TT4 QPvaNmoM5fooVToOLZxpjGCjpddOi3FtVXHD/LZWbdtFD/MiXC1ez2LmuJ8hmxOt 90IOEUz7yI0wk5+kQvYJ4Di8iEkKJyf8s5498NiI6h1azNiQTNur1hpAPWJV558N baSbDuGeN8qQLFYfGNWmC7qrcNxaxDFMxSD5K5XV33AIgmFHXw5lJO4ZUK/n+XKF 6o0wwUeWShWhiApvOIBdnkYqDcJiU7ExqbOhrBjL8wHTLcOSCvdSXxooLVw8wozL Gnanz4N2Xl552HixwQZ8QQwmQCEFIi18Y1LrkMy6tEpbxxSPvwE7qqSwPy/icFwk DuH+OnB1IBvM8z9Gxr1BPWNROA97Kl5l8JGT8hict0ZzlrB9y7ty+ALk1A2ZjfcP 9xnJXUAu3pEoBsxlIH2SXSA4Nuus1M7jHi5q6E8sUirL1a+2T7W62lTQdoFSqc9o Ihh6MgVK5SE4cr2N/HLlh46j1ACIdJ+abQo721MFI3dNCFsWQiXOoWt9ARwM1MVw HciSdrV4cO+nobhOOZ+kXscjESkN+e5Ap34DKeg9JTCTfVPToU2gPdkUFF+hrPia Or3cKenrvvwQ0b/pnPYxGg== =c6Nh -----END PGP PUBLIC KEY BLOCK----- dar-2.7.17/doc/mini-howto/0000755000175000017520000000000014767510034012227 500000000000000dar-2.7.17/doc/mini-howto/dar-differential-backup-mini-howto.en.html0000644000175000017520000012605414041360213022164 00000000000000 DAR differential backup mini-howto -EN-

    DAR differential backup mini-howto -EN-

    Author: Grzegorz Adam Hankiewicz
    Contact: dar@gradha.imap.cc
    Date: 2012-12-19
    Web site: http://gradha.github.com/dar-differential-backup-mini-howto/
    Copyright: This document has been placed in the public domain.
    Translations: From the web site you can get this document in English, Italian and Spanish.

    Introduction

    We all should make backups of our important data. This omnipresent advice is usually ignored by most people. I ignored it too, until I lost a good deal of important data. Not having learned my lesson, I kept losing data in a few subsequent incidents, until I decided that enough was enough. Then I browsed Freshmeat for backup solutions allowing differential backups and found DAR.

    A complete backup means that all the files falling under your backup policy will be saved. A differential or incremental backup will contain only the files whose contents have changed since the previous backup, either full or differential.

    DAR allows you to easily create a set of differential backups. The solution I've developed gives me an automatic backup that runs every night. The first day of the month, a full backup is made. The rest of the month, only differential backups are made. In my situation, very few files change from day to day: sometimes the source code of the project I'm hacking on, and always my mailboxes.

    The result is that I can easily restore the contents of my computer to a specific day, if I ever need to. DAR is a command line program, and it can get slightly complex with its many options. This little mini-howto explains my custom solution, which is very crude, but works fine for me. Yes, I've actually tested restoring data from the backup. In fact, at the end of 2003 I moved to another country and took just one CD ROM with me plus a bootable Knoppix, and I recovered the exact state of my Debian installation in a few hours. No customizing, no long installations, no missing files.

    This document was written using version 1.3.0 of DAR. When I updated to DAR 2.0.3, everything kept working; I didn't even have to update my backup archives. So it looks like the interface and backup format are pretty stable, or at least backwards compatible. However, don't take everything said here for granted. Verify that the version of DAR you have installed works as expected, and that you can restore from the generated backup, before you have to rely on it.

    This version of the text uses reStructuredText (that's what the weird markup in the text version is for). See http://docutils.sourceforge.net/ for more information.

    Simple DAR usage

    DAR is very similar to tar in the number of options it has: there's one for every need, but far too many for beginners to handle. As usual, you can always get help from the program by typing dar -h or man dar after you have installed it. Like tar, there's a set of mandatory switches which define the type of operation you are doing (create, extract, list, etc.), and a set of switches which affect the selected operation. Just for the sake of it, imagine that you want to back up one folder of your home directory. You would write something like this:

    dar -c backup_file_without_extension -g file1 -g file2 ... -g fileN
    

    The output should be similar to the following:

    $ dar -c my_backup_file -g safecopy.py/ -g translate_chars.py/
    
    
     --------------------------------------------
     15 inode(s) saved
     with 0 hard link(s) recorded
     0 inode(s) not saved (no file change)
     0 inode(s) failed to save (filesystem error)
     4 files(s) ignored (excluded by filters)
     0 files(s) recorded as deleted from reference backup
     --------------------------------------------
     Total number of file considered: 19
    $ ls
    mailbox_date_trimmer/  my_backup_file.1.dar  sdb.py/
    mailbox_reader/        safecopy.py/          translate_chars.py/
    

    As you will notice, DAR adds a number and an extension to the name you give. The purpose of the extension is clear: it helps you see at a glance that the file is a DAR backup. The number is called a slice number, and it is related to DAR's built-in feature of splitting a backup over several media. If for example you wanted to make a backup to CD ROM, but your directories are bigger than the capacity of one CD ROM, you can tell DAR to split the archive across as many files as needed, which you can later burn to separate discs.
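    To make the slice naming concrete, here is a tiny shell sketch that only prints the kind of file names dar produces for a split archive; dar itself is not invoked, and the base name is just the one from the example above:

```shell
# Illustrative only: mimic the base.N.dar slice names dar emits with -s.
base="my_backup_file"
slices=""
for n in 1 2 3; do
  slices="${slices}${base}.${n}.dar "
done
echo "$slices"
```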

    Would you like to recover that backup? Pretty easy, type the following:

    $ mkdir temp
    $ cd temp
    $ dar -x ../my_backup_file
    file ownership will not be restored as dar is not run as root.
    to avoid this message use -O option [return = OK | esc = cancel]
    Continuing...
    
    
     --------------------------------------------
     15 file(s) restored
     0 file(s) not restored (not saved in archive)
     0 file(s) ignored (excluded by filters)
     0 file(s) less recent than the one on filesystem
     0 file(s) failed to restore (filesystem error)
     0 file(s) deleted
     --------------------------------------------
     Total number of file considered: 15
    $ ls
    safecopy.py/  translate_chars.py/
    

    The backup strategy

    The first step to create a good backup is to determine what parts of your system need one. This doesn't necessarily mean that you can't create a full backup, but most likely splitting it into at least two parts is going to help DAR (or any backup tool) a lot.

    My home system consists of two hard disks. The first hard disk is split into a 3.8 GB partition where my complete system lives, and another partition of 11 GB where all my music and other temporary files are stored, like a local Debian package repository I make for myself. The second hard disk has a 9.4 GB partition and its only purpose is to serve as backup of the primary disk. I have no interest in backing up my music, because I have all the original CDs lying around and have scripts to re-ogg them.

    From the 3.8 GB I want to back up, usually between 1.3 and 1.5 GB are always empty. I will logically split the used 2.3 GB into system and home directories (at the moment of writing this my home is 588 MB). The reason for this split is that as a normal user, I can only change my home directory and other files on the partitions I won't be backing up. Meanwhile, the system part of the partition remains pretty stable and unmodified, because I rarely (un)install software. In fact, the only things in my home directory that usually change are my Mail folder and projects, where I put documents like this one and other software I write/hack.

    The basic distinction between home directories and system can be useful in organizations too. If you work for a university, usually all machines will have the same system configuration but depending on the machine their homes will have different data. You can make a system backup of a single machine, and home backups of each computer. Another common configuration is having a centralized server which exports home directories with NFS. Here you only have to backup the server. If you have users with high privileges, leave them the task of doing the system backup of their own machines, the exported home is something they can ignore because it will be done at the server machine.

    Once you've decided what to back up, you need to decide how to configure DAR for the backups. You can use switches or configuration files. Switches are OK when you don't have many options. Configuration files are better when you want to set up different complex inclusion/exclusion rules for the files you want to back up, and more importantly, you can use comments to document the switches, stating for example the reason why you included this or that directory. This can be useful if you come back several months later and wonder why all those options are there.

    For my setup, I'll be running the DAR commands inside shell scripts called periodically by cron (Setting up some scripts to automate the process), so I don't mind having long command lines, and this very same document serves for the purpose of documenting the scripts. If you prefer configuration files, read DAR's documentation to find out how to use them and the format they use.

    Making a full backup with DAR

    Here is the full command line I'll be using for my system backup, running as root. Don't worry about the high number of switches, I'll go on describing the purpose of each of them:

    dar -m 256 -y -s 600M -D -R / -c `date -I`_data -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner -P cdrom
    
    • -m 256

      DAR can compress your backup. The compression is applied to individual files, and it can be counterproductive for small files. By default, files of 100 bytes or less won't be compressed. With the -m switch I increase this threshold to 256, which seems to work better for all those little configuration files lying under /etc/ and /home. As you see, this is a totally optional switch, basically for tuning freaks like me.

    • -y [level]

      This option activates Bzip2 compression of the archive, which is turned off by default. You can also specify a numeric compression level, which goes from 0 (no compression) to 9 (best compression, slow processing). Bzip2 uses 6 by default, which is the best speed/compression ratio for most files. I don't specify a compression level; 6 is fine for me.

    • -s 600M

      Here comes DAR's slice feature. The specified size of 600 Megabytes is the maximum file size DAR will create. If your backup is bigger, you will end up with different backup files each with a slice number before the file extension, so you can save each file to a different unit of your backup media (floppies, zip, CDROM, etc). My backups are much smaller than this size, and I keep this switch just to be safe if I happen to create a big file in my home directory and forget to delete it. If this switch is useful for you, check DAR's manual for the -S switch too.

    • -D

      Stores directories excluded by the -P option or absent from the command line path list as empty directories. This is helpful when you are recovering a backup from scratch, so you don't have to create manually all the excluded directories.

    • -R /

      Specifies the root directory for saving or restoring files. By default this points to the current working directory. We are doing a system backup here, so it will be the root directory.

    • -c `date -I`_data

      This is the mandatory switch I talked of before, and it means to create a backup archive. For those who don't understand what follows: `date -I` is the shell's backtick expansion. In short, date -I prints a date in YYYY-MM-DD format. With backticks, the output of the command is substituted into the parent command line. This way you can create backup archives with the creation date embedded in the name. If you still don't understand what I'm talking about, try running the following from the command line:

      echo "Today's date is `date -I`"
      
    • -Z file_pattern

      Using normal file name globbing you can specify patterns of files you want to store in your archive without compression. This only makes sense if you use the -y switch. Compressing already compressed files only yields bigger files and wasted CPU time.

    • -P relative_path

      With this switch you tell DAR which paths you don't want to store in your backup archive. Here you want to put the home directory (I'm the only user on this machine; there are a few more accounts, but they are for testing/system purposes), system directories which aren't really physical files like proc, other drives you may have mounted under mnt (most notably the drive where you are putting the backup file), etc. Note that the paths you specify must be relative to the path specified by the -R switch.

    That wasn't so hard. Check DAR's manual page for more useful switches you might want to use. And here's the command line I'll be running as a plain user inside my home directory:

    dar -m 256 -y -s 600M -D -R /home/gradha -c `date -I`_data \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer
    

    Nothing new under the sun. As you see, most of the command line is identical to the previous one; I only change the directories I want to exclude with -P and the root directory with the -R switch.

    Making differential backups with DAR

    Once you have a full backup you can create a differential backup. The first differential backup has to be done using the full backup as reference. The following differential backups use the latest differential backup as reference. Here's the command line for a system differential backup:

    dar -m 256 -y -s 600M -D -R / -c `date -I`_diff -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner -P cdrom \
       -A previous_backup
    
    • -c `date -I`_diff

      I only change the name of the file, for cosmetic purposes.

    • -A previous_backup

      This new switch tells DAR where to find the previous backup, so it can create a differential backup instead of a full one. The only thing you have to take care of is to specify neither the slice number nor the extension in the file name; otherwise DAR will ask you an interactive question at the command line.

    The user command line is exactly the same. Here it is for completeness:

    dar -m 256 -y -s 600M -D -R /home/gradha -c `date -I`_diff \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer -A previous_backup
    

    DAR has another nice feature we don't use here: catalogues. When you create a backup archive with DAR, internally it contains the data plus a catalogue. This catalogue contains information about what files were saved, their dates, their compressed size, etc. You can extract the catalogue and store it separately. Why would you want to do this? To set up networked differential backups.

    In order to create a differential backup, you need to provide the previous backup so DAR can decide which files have changed. Doing this can be expensive in bandwidth if you work over a network. Instead, after you create a backup, you can extract the catalogue and send it to the machine running the backups. The next time, you can use this file with the -A switch, and it will all work as if the complete archive were there.

    This can also be useful if you use slices, because the catalogue is created from the first and last slices. It's more convenient to pass a single file to the backup command than to carry the disks of your previous backup with you.
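    As a hedged sketch of how you would isolate such a catalogue (dar's -C switch performs the isolation operation, taking the source archive with -A; check man dar for the details), the snippet below only builds and prints the command instead of running it, since executing it needs a real archive, and the archive name here is illustrative:

```shell
# Build (but do not run) a catalogue isolation command.
archive="2003-10-01_data"              # base name: no slice number, no extension
cmd="dar -C ${archive}_cat -A ${archive}"
echo "$cmd"
```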

    Setting up some scripts to automate the process

    As said before, now it's time to put our backup solution under cron control. Place the following executable script for the system backup under /root/dar_backup.sh:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner \
       -P cdrom -P var/backups > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chown :gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    Some things to notice:

    • DIR is the variable which holds the destination directory.
    • FILE will hold the path to today's backup file.
    • I use full paths for the commands because my root account doesn't have all of them included in its default environment. This is potentially a security risk. Ideally you would compile DAR as root and keep the binaries where you built them, so nobody can touch them. And run Tripwire over them too.
    • DAR generates statistics after each run. We don't want them in our cron because it will generate unnecessary mail. Only stdout is redirected to /dev/null. Errors will be reported and a mail generated if something goes wrong.
    • The last two find commands are optional. I use them to change file ownership to a normal user, which will later create the backup. Again, another security risk: root should back up root's files, and users should back up their own stuff. But on a single-user system, I don't care. If some intruder is good enough to get through my firewall and account passwords to take a look at my backups, I'm already screwed.

    Now place the following nearly identical script for differential backups under /root/dar_diff.sh:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_diff
    PREV=`/bin/ls $DIR/*.dar|/usr/bin/tail -n 1`
    /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp -P mnt \
       -P dev/pts -P proc -P floppy -P burner -P cdrom \
       -P var/backups -A ${PREV%%.*} > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chown :gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    The only two changes are the addition of the -A switch and the generation of the PREV variable with a complicated command line. Let's see what this command line does:

    • First the ls command creates a list of the files with .dar extension in the backup directory. This output is piped to the next command.
    • By default ls displays files alphabetically. tail is used to get the last file with the -n 1 switch, which says to display only the last line.
    • DAR wants to operate on filenames without the slice number and extension. This means that if we don't strip that tail from the name, DAR will stop the operation and ask the user an interactive question, defeating the purpose of automation. We trim the complete filename with a Bash feature called parameter expansion. There are several possible expansions; you can type man bash to see all of them. The one using %% removes the longest trailing pattern that matches whatever goes after the %%. The result is the base name we want to pass to DAR.
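    The expansion is easier to see on a hypothetical slice filename (the path and date below are only examples):

```shell
# ${PREV%%.*} drops the longest suffix matching '.*': slice number + extension.
PREV="/var/backups/system/2003-10-02_diff.1.dar"
base="${PREV%%.*}"
echo "$base"
```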

    We only have to put these two scripts under cron control. This is what we have to type after crontab -e:

    15 0 2-31 * * ./dar_diff.sh
    15 0 1    * * ./dar_backup.sh
    

    Look up the syntax of these entries in man -S 5 crontab. In short, those two lines tell cron to run the scripts 15 minutes past midnight. dar_backup.sh will be run only on the first day of the month. The other script will be run on all the other days.

    Here are the backup scripts for your users. They are the same, changing only switches to the DAR command and paths:

    #!/bin/bash
    # dar_backup.sh
    
    DIR=/var/backups/gradha
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    /usr/local/bin/dar -m 256 -y -s 600M -D -R /home/gradha -c $FILE \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chmod 400 \{\} \;
    
    #!/bin/bash
    # dar_diff.sh
    
    DIR=/var/backups/gradha
    FILE=${DIR}/`/bin/date -I`_diff
    PREV=`/bin/ls $DIR/*.dar|/usr/bin/tail -n 1`
    /usr/local/bin/dar -m 256 -y -s 600M -D -R /home/gradha -c $FILE \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer \
       -A ${PREV%%.*} > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chmod 400 \{\} \;
    

    Don't forget to add the required crontab entries for your user pointing to the appropriate path.

    Recovering your backup to a clean machine

    When the time comes to restore your backup, depending on what you saved you will have a full backup from one month plus differential backups up to the last day you managed to make one. The restoration process is very simple: it's the same as described in the first chapter (Simple DAR usage), only you have to do it first for the full backup and then for the differential ones. This can be boring, so here's another shell script you can save with your backup files:

    #!/bin/bash
    
    if [ -n "$3" ]; then
       CMD="$1"
       INPUT="$2_data"
       FS_ROOT="$3"
       $CMD -x "$INPUT" -w -R "$FS_ROOT"
       for file in ${INPUT:0:8}*_diff*; do
          $CMD -x "${file:0:15}" -w -R "$FS_ROOT"
       done
       echo "All done."
    else
       echo "Not enough parameters.
    
    Usage: script dar_location base_full_backup directory
    
    Where dar_location is a path to a working dar binary, base_full_backup
    is a date in the format 'YYYY-MM-DD', and directory is the place where
    you want to put the restored data, usually '/' when run as root."
    fi
    

    The script is pretty self-explanatory. The only thing to note is the -w switch, which tells DAR to overwrite files it finds. This is necessary for differential backups. Oh, and place the script in the same directory where you put your backup files. Here's a usage example:

    ./recover.sh /usr/local/bin/dar 2003-10-01 /tmp/temp_path/
    

    Try to run that as a normal user with a few of your backup files. You can put the result in a temporary directory, so the nice thing is you don't have to wipe your hard disk to test it.
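    The two substring expansions the script relies on (bash ${var:offset:length}) can be checked in isolation; the dates below are only examples of the naming scheme used throughout this howto:

```shell
# First 8 characters: the year-and-month prefix used to glob that month's diffs.
INPUT="2003-10-01_data"
echo "${INPUT:0:8}"
# First 15 characters: date + '_diff', i.e. the base name without slice/extension.
file="2003-10-05_diff.1.dar"
echo "${file:0:15}"
```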

    Adding checks to the backup scripts

    Denis Corbin suggests that the scripts creating the backups could verify the exit status of the DAR command. For the purpose of these very simple scripts this is not critical because DAR itself will bail out with an error message, and cron will report any output through mail (something which doesn't happen if everything goes right).

    However, testing the exit status can be useful if you are testing the scripts interactively and want to know which commands are executed:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    if /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
          -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
          -P mnt -P dev/pts -P proc -P floppy -P burner \
          -P cdrom -P var/backups > /dev/null ; then
       if /usr/local/bin/dar -t $FILE > /dev/null ; then
          echo "Archive created and successfully tested."
       else
          echo "Archive created but test FAILED."
       fi
    else
       echo "Archive creating FAILED."
    fi
    /usr/bin/find $DIR -type f -exec chown :gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    You can test this version easily by running the script and killing the DAR process from another terminal or console with killall dar. That will force the termination of the DAR process, and you will see that one of the failure branches is reached in the backup script.

    Another possible use of testing the status code could be to remove incomplete archives from the hard disk if something went wrong, trigger additional external commands when something fails, or skip testing the created archive when you know that the creation command already failed. The latter can be done easily by concatenating the creation and testing commands with && on a single line. That tells the shell to run both commands in sequence, skipping the second if the first failed.
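    The && idea looks like this, with trivial stand-ins replacing the actual dar commands (dar itself is not run here):

```shell
# Stand-ins: create_archive plays the role of 'dar -c ...', test_archive of 'dar -t ...'.
create_archive() { true; }
test_archive()   { echo "tested"; }
# The second command runs only because the first succeeded.
result=$(create_archive && test_archive)
echo "$result"
```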

    However, if a power failure happens in the middle of a backup, this version of the script would still leave dangling invalid archives. To prevent this you could enhance the script to do a positive verification: create the backup in a temporary directory, and write a *.valid file only when the successful branch of the script is reached.

    With this strategy, another cron script monitoring the directory where the temporary backups are placed would move to the final backup directory those archives which have a *.valid file, deleting all others whose last modification timestamp is older than one hour.
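    A minimal sketch of that promotion step follows; all names are illustrative, and the real cron script would move the files to the final directory instead of just printing them:

```shell
# Set up a fake temporary backup directory with one finished archive + marker.
TMP=$(mktemp -d)
touch "$TMP/2003-10-01_data.1.dar"    # stand-in for a completed archive slice
touch "$TMP/2003-10-01_data.valid"    # marker written only on success
promoted=""
# Promote only archives that have a matching .valid file.
for v in "$TMP"/*.valid; do
  base="${v%.valid}"
  [ -e "${base}.1.dar" ] && promoted="$(basename "$base")"
done
echo "$promoted"
rm -r "$TMP"
```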

    Ideas for the future

    I'm not going to implement these soon, because I'm very lazy, but if you are one of those hyperactive hackers, here are some things which would be nice:

    • Unify both the main and differential scripts into a single one, so if the script is run and there is no main backup for the current month, the main backup will be created. Useful if your machine happens to be down during the time the monthly backup is done.

    • Upgrade the scripts to generate a CDROM image daily with cdrecord and burn it automatically to a rewritable disc placed in your machine. That way, if your whole hard disk is trashed, you still have the last backup on removable media. Of course, this is limited and cannot be automated if your backup spans more than one CDROM. Do the same for ZIP/JAZZ/whatever you have.

    • Integration of the generated backups with a mini Knoppix bootable distribution, or any other floppy distribution which can be booted from CDROM. That way you have a recovery CDROM with tools to format your hard disk, and next to it a fresh backup to restore a working machine.

    • Synchronisation of backup directories through Internet with remote hosts. Even if the whole machine is burnt physically along with your house, you have up to date backups somewhere else. Could be done easily with programs like rsync through ssh running in a cron job.

    • Factor common parameters into a separate file and include it from your scripts using DAR's -B switch. For instance:

      $ cat > /var/backups/system/common.dcf
      -m 256 -y -s 600M -D -R / -Z "*.gz" -Z "*.bz2" -Z "*.zip" \
      -Z "*.png" -P home/gradha -P tmp -P mnt -P dev/pts \
      -P proc -P floppy -P burner -P cdrom -P var/backups
      

      Later on you could use this in the script:

      DIR=/var/backups/system
      FILE=${DIR}/`/bin/date -I`_data
      # Commands
      /usr/local/bin/dar -B ${DIR}/common.dcf -c $FILE > /dev/null
      /usr/local/bin/dar -t $FILE > /dev/null
      /usr/bin/find $DIR -type f -exec chown :gradha \{\} \;
      

      Which you can reuse in the differential version too!

    In fact, clever people out there have already started making such scripts for themselves and are not afraid to share them. To avoid cluttering this mini-howto I'm going to store them as-is at my web page: https://github.com/gradha/dar-differential-backup-mini-howto/tree/master/contrib.

    Feel free to send me your own improvements and I'll add them to the directory. Whether you are sending a single script file or a .tar.gz with a whole backup suite, please add a simple .txt file which I'll put next to yours, so people can read what the file does before downloading. Please use English in your description, and don't forget to put your name and email so people can send you bugfixes or improvements!

    The end

    And that's the whole magic. If you have problems, if something is unclear or, worse, wrong, drop me an email. If you find this document useful and want to translate it, send me a translation of the file source.en.txt so I can distribute it along with this version and users can easily find their localized version. Talking about locations, you should be able to get the source of this document from my personal home page (link at the beginning of the document).
    Enjoy!

    DAR differential backup mini-howto -ES-

    Author: Grzegorz Adam Hankiewicz
    Contact: dar@gradha.imap.cc
    Translator: Grzegorz Adam Hankiewicz
    Date: 2012-12-19
    Web site: http://gradha.github.com/dar-differential-backup-mini-howto/
    Copyright: This document has been placed in the public domain.
    Translations: From the web site you can get this document in English, Italian and Spanish.

    Introduction

    We all should make backups of our important data. This omnipresent advice is usually ignored by most people. I ignored it too, until I lost a good deal of important data. Not having learned my lesson, I continued losing data in some later incidents, until I decided that enough was enough. Then I searched Freshmeat for backup programs that allowed differential backups and found DAR.

    A complete backup means that all the files falling under your backup policy will be saved. A differential or incremental backup will contain only the files whose contents have changed since the previous backup, whether full or differential.

    DAR allows you to easily create a set of differential backups. The method I've developed gives me automatic backups that run every night. The first day of the month, a full backup is made. The rest of the month, only differential backups are made. In my situation, very few files change from day to day: sometimes the source code of the project I'm working on, and always my mailboxes.

    The result is that I can easily restore the contents of my computer to a specific day, should I need to. DAR is a command line program, and it can get slightly complex with some options. This little mini-howto explains my personal solution, which is very crude, but gives me good results. Yes, I have verified that I can recover data from the backups. In fact, at the end of 2003 I moved to another country and took with me only a CD ROM with a bootable Knoppix, and I recovered the exact state of my Debian installation in a matter of hours. No customizing, no long installations, no lost files.

    This document was written using version 1.3.0 of DAR. When I updated to DAR 2.0.3, everything kept working; I didn't even have to update my backup archives. So it seems the interface and backup format are fairly stable, or at least backwards compatible. However, don't trust this document blindly. Verify that the version of DAR you have installed works as expected, and that you can recover a generated backup, before you have to depend on it.

    This version of the text uses reStructuredText (that's what the strange markup in the text version is for). Read more about it at http://docutils.sourceforge.net/.

    Simple DAR usage

    DAR is very similar to tar in the number of options it has: there's one for every need, but far too many for a novice. As usual, you can always get help from the program by typing dar -h or man dar after installing it. Like tar, there's a set of mandatory parameters which define the type of operation you are going to perform (create, extract, list, etc.), and a set of parameters which affect the selected operation. Just to try it out, imagine that you want to back up your home directory. You would write something like this:

    dar -c fichero_sin_extension -g file1 -g file2 ... -g fileN
    

    The output should be similar to this:

    $ dar -c mi_copia -g safecopy.py/ -g translate_chars.py/
    
    
     --------------------------------------------
     15 inode(s) saved
     with 0 hard link(s) recorded
     0 inode(s) not saved (no file change)
     0 inode(s) failed to save (filesystem error)
     4 files(s) ignored (excluded by filters)
     0 files(s) recorded as deleted from reference backup
     --------------------------------------------
     Total number of file considered: 19
    $ ls
    mailbox_date_trimmer/  mi_copia.1.dar        sdb.py/
    mailbox_reader/        safecopy.py/          translate_chars.py/
    

    As you will have noticed, DAR adds a number and an extension to the name you give. The purpose of the extension is clear: it helps you see at a glance that the file is a DAR backup. The number is called a slice, and it is related to DAR's ability to split a backup across several storage devices. If, for example, you wanted to back up to CD-ROMs but your directories are bigger than the capacity of one, you can tell DAR to split the archive into as many files as needed, which you can later burn to several CD-ROMs.

    Want to restore your backup? Very easy, just type the following:

    $ mkdir temp
    $ cd temp
    $ dar -x ../mi_copia
    file ownership will not be restored as dar is not run as root.
    to avoid this message use -O option [return = OK | esc = cancel]
    Continuing...
    
    
     --------------------------------------------
     15 file(s) restored
     0 file(s) not restored (not saved in archive)
     0 file(s) ignored (excluded by filters)
     0 file(s) less recent than the one on filesystem
     0 file(s) failed to restore (filesystem error)
     0 file(s) deleted
     --------------------------------------------
     Total number of file considered: 15
    $ ls
    safecopy.py/  translate_chars.py/
    

    The backup strategy

    The first step to creating a good backup is determining which parts of your system need one. This doesn't necessarily mean you cannot create a full backup, only that splitting the backup into at least two parts can help DAR (and any other backup tool) a lot.

    My home system consists of two hard disks. The first is split into a 3.8 GB partition where my whole system lives, and another 11 GB partition where I store my music and other temporary files, like a local Debian package repository I make for myself. The second hard disk has a 9.4 GB partition whose sole purpose is to serve as a backup of the first disk. I have no interest in backing up my music, because I have all the original CDs and scripts to recompress them to Ogg format.

    Of the 3.8 GB I want to back up, usually between 1.3 and 1.5 GB are free. The used 2.3 GB I split logically into system and home directories (at the time of writing, my home takes up 588 MB). The reason for this separation is that as a normal user I can only change things in my home directory and other files in partitions I don't back up. Meanwhile, the system part of the partition is fairly stable and rarely modified, because I (un)install software only once in a while. In fact, the only things in my home directory that usually change are my Mail and projects directories, where I put this document and other software I write/hack.

    The basic differentiation between home and system directories can also be useful in organizations. If you work for a university, usually all machines will have the same system configuration, but their home directories will contain different data depending on the machine. You can make a system backup of a single machine, and home backups of each machine. Another common setup is a central server that exports home directories over NFS. Here you only have to back up the server. If you have users with high privileges, leave them the task of making system backups of their own machines; the exported home directory is something they can ignore, since it will be done at the server.

    Once you have decided what you want in your backup, you have to decide how to configure DAR. You can use switches or configuration files. Switches are fine when you don't have many options. Configuration files are better when you want to add complex file inclusion/exclusion rules; moreover, you can use comments to document the switches, stating for example the reason why you include this or that directory. This can be useful if you come back a few months later and wonder what all those options do.
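
    As a small sketch of what such a configuration file could look like (the switches mirror the command line used later in this document; the file name and comments are hypothetical, see man dar for the -B switch and the exact file format):

```shell
#!/bin/sh
# Write a hypothetical DAR options file; 'dar -B mini.dcf -c name' would
# then read these switches as if they had been typed on the command line.
cat > mini.dcf <<'EOF'
# Compression: skip files of 256 bytes or less, compress with bzip2.
-m 256 -y
# Split the archive into 600 MB slices.
-s 600M
# Do not recompress already-compressed formats.
-Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png"
EOF
wc -l < mini.dcf
```

    The comment lines double as the documentation mentioned above: six months later you still know why each switch is there.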

    With my setup, I'll be running the DAR commands from shell scripts called periodically by cron (Setting up some scripts to automate the process), so I don't mind long command lines, and this very document doubles as documentation for those scripts. If you prefer configuration files, read DAR's documentation to learn their format and how to use them.

    Making the full backup with DAR

    Here is the complete command line I'll use for my system backup, run as root. Don't worry about the large number of switches; I'll describe their purpose one by one:

    dar -m 256 -y -s 600M -D -R / -c `date -I`_data -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner -P cdrom
    
    • -m 256

      DAR can compress your backup. Compression is applied to individual files, and it can be harmful for small files. By default, files of 100 bytes or less will not be compressed. With the -m switch I increase this value to 256, which seems to work better for all those small configuration files stored under /etc/ and /home. As you can see, this option is completely optional, basically for tuning freaks like me.

    • -y [level]

      This option activates Bzip2 compression of the archive, which is off by default. You can even specify a numeric compression level, which ranges from 0 (no compression) to 9 (best compression, slow processing). Bzip2 uses 6 by default, which is the best speed/compression ratio for most files. I don't specify a compression level; 6 is fine for me.

    • -s 600M

      Here is DAR's slicing feature. The specified size of 600 megabytes is the maximum file size DAR will create. If your backup is bigger, you will get several backup files, each with its slice number before the file extension, so you can save each one to a different storage unit (floppies, Zip disks, CD-ROMs, etc.). My backups are much smaller than this size, and I keep this switch just in case I happen to create some big file in my home directory and forget to delete it. If you find this switch useful, also read about the -S switch in DAR's manual.

    • -D

      Stores as empty directories those directories excluded by the -P option, as well as those absent from the command line arguments. This is useful when you restore a backup from scratch, so you don't have to create all the excluded directories by hand.

    • -R /

      Specifies the root directory for saving or restoring files. By default this points to the current working directory. We are making a system backup, so it will point to the root directory.

    • -c `date -I`_data

      This is one of the mandatory switches I talked about before, and it means create a backup. For those who don't understand what follows, `date -I` is the shell's backtick command substitution. In short, date -I outputs the date in YYYY-MM-DD format. Wrapped in backticks and used as a parameter, the command's output is used as a string by the parent command. This way you can create backups with the creation date embedded in the name. If you still don't know what I'm talking about, try running the following from the command line:

      echo "Today's date is `date -I`"
      
    • -Z file_pattern

      Using the usual file globbing rules you can specify patterns of files that you want stored in your backup without compression. This only makes sense when you use the -y switch. Compressing already compressed files only yields bigger files and wastes CPU time.

    • -P relative_path

      With this switch you tell DAR which paths you do not want stored in your backup. Here you will likely want to put the home directory (I'm the only real user of the machine; there are a few more accounts, but only for testing/system purposes), system directories which are not really physical files like proc, other drives you may have mounted under mnt (notably the drive where you are going to put the backup), etc. Note that the paths you specify here must be relative to the path specified by the -R switch.

    That wasn't so hard. In DAR's manual you can read about more switches you may want to use. And here is the command line I'll run as a normal user inside my home directory:

    dar -m 256 -y -s 600M -D -R /home/gradha -c `date -I`_data \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer
    

    Nothing new under the sun. As you can see, most of the command line is identical to the previous one; I only change the names of the directories I want to exclude with -P and the root directory with the -R switch.

    Making differential backups with DAR

    Once you have a full backup you can create a differential backup. The first differential backup has to be made using the full backup as reference. Subsequent differential backups use the latest differential backup as reference. Here is the command line for a differential backup of the system:

    dar -m 256 -y -s 600M -D -R / -c `date -I`_diff -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner -P cdrom \
       -A copia_previa
    
    • -c `date -I`_diff

      I only change the name of the file, for cosmetic reasons.

    • -A copia_previa

      This new switch is used to tell DAR where to find the previous backup so it can create a differential backup instead of a full one. The only thing you have to be careful about is to specify neither the slice number nor the extension in the file name; otherwise DAR will ask you an interactive question on the command line.

    The user command line is exactly the same. Here it is:

    dar -m 256 -y -s 600M -D -R /home/gradha -c `date -I`_diff \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer -A copia_previa
    

    DAR has another nice feature we are not using: catalogues. When you create a backup with DAR, it internally contains all the data plus a catalogue. This catalogue holds information about which files were saved, their dates, their compressed size, etc. You can extract the catalogue and store it separately. Why would you want to do that? To set up differential backups over a network.

    To create a differential backup, you need to provide DAR with the previous backup so it can decide which files have changed. Doing this can consume a lot of bandwidth on a network. Instead, after creating the backup, you can extract the catalogue and send it to the machine making the backups. Next time, you can use this file with the -A switch, and it will work as if the whole backup file were there.

    This can also be useful if you use slices, because the catalogue is created from the first and last slice. It is much more comfortable to use a single file with the backup command than having to carry around the disks of the previous backup.
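
    As a sketch of the catalogue workflow (the date-stamped names follow this document's convention but are hypothetical; check man dar for the -C switch before relying on it), the dar invocations are shown as comments and only the derived name is computed:

```shell
#!/bin/sh
# Hypothetical archive name in the style of this document's backups.
BASE=2003-10-01_data

# Isolate the catalogue of an existing archive into its own small file
# (-C creates an isolated catalogue, -A names the reference archive):
#    dar -C ${BASE}_catalog -A $BASE
#
# Copy the resulting ${BASE}_catalog.1.dar to the machine making the
# backups, then use it as the -A reference for the next differential
# backup instead of shipping the whole previous archive:
#    dar -c `date -I`_diff -A ${BASE}_catalog -R / ...
echo "reference for next diff: ${BASE}_catalog"
```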

    Setting up some scripts to automate the process

    As mentioned before, it's time to set up the backups under cron. Put the following executable script for system backups under /root/dar_backup.sh:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner \
       -P cdrom -P var/backups > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chown .gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    Some things to note:

    • DIR is the variable holding the destination directory.
    • FILE will hold the path to the day's backup file.
    • I use complete paths for the commands because my root account doesn't have them included in its default environment. This is a potential security risk. Ideally you would compile DAR as root and keep the binaries where you built them, so nobody can touch them. And run Tripwire over them too.
    • DAR generates statistics after each run. We don't want them in our cron output because they would generate unnecessary emails. Only stdout (standard output) is redirected to /dev/null. Errors will still be shown, and an email sent if something goes wrong.
    • The last two find commands are optional. I use them to change the owner to a normal user, who will later handle the backup files. Again, another security risk. Root should back up what's root's, and users should make their own backups. But on a single-user system I don't care. If some intruder is good enough to get through my firewall and crack my user account passwords to read the backups, I'm already screwed.

    Now put the following almost identical script for differential backups under /root/dar_diff.sh:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_diff
    PREV=`/bin/ls $DIR/*.dar|/usr/bin/tail -n 1`
    /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp -P mnt \
       -P dev/pts -P proc -P floppy -P burner -P cdrom \
       -P var/backups -A ${PREV%%.*} > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chown .gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    The only two changes are the addition of the -A switch and the generation of the PREV variable with a complicated command line. Let's see what this command line does:

    • First the ls command creates a listing of the files with the .dar extension in the backup directory. The output is piped to the next command.
    • By default ls lists files alphabetically. We use tail to keep only the last file, with the -n 1 switch, which says to show only the last line.
    • DAR always wants to operate on file names without the slice number or extension. This means that if we don't get rid of those, DAR will stop the operation to ask the user an interactive question, ruining all the automation. We strip the full file name with a Bash feature called parameter expansion. There are several possible expansions; you can type man bash to see them all. The one using %% removes the longest trailing pattern matching whatever comes after %%. The result is the base name we want to pass to DAR.
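
    The expansion can be tried in isolation (the file name below is just an example in the style of the scripts above):

```shell
#!/bin/bash
# Example of the kind of name ls | tail -n 1 would return:
PREV=/var/backups/system/2003-10-01_diff.1.dar

# ${PREV%%.*} removes the longest suffix matching the pattern '.*',
# i.e. everything from the first dot on: slice number and extension
# disappear, leaving the base name DAR expects.
echo "${PREV%%.*}"
```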

    Now we only have to put these two scripts under cron. This is what we have to type after crontab -e:

    15 0 2-31 * * ./dar_diff.sh
    15 0 1    * * ./dar_backup.sh
    

    You can read about the syntax with man -S 5 crontab. In short, these two lines tell cron to run the scripts 15 minutes past midnight. dar_backup.sh will be run only on the first day of the month. The other script will be run every other day.

    Here are the backup scripts for your users. They are the same, changing only the DAR command's switches and some paths:

    #!/bin/bash
    # dar_backup.sh
    
    DIR=/var/backups/gradha
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    /usr/local/bin/dar -m 256 -y -s 600M -D -R /home/gradha -c $FILE \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chmod 400 \{\} \;
    
    #!/bin/bash
    # dar_diff.sh
    
    DIR=/var/backups/gradha
    FILE=${DIR}/`/bin/date -I`_diff
    PREV=`/bin/ls $DIR/*.dar|/usr/bin/tail -n 1`
    /usr/local/bin/dar -m 256 -y -s 600M -D -R /home/gradha -c $FILE \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer \
       -A ${PREV%%.*} > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chmod 400 \{\} \;
    

    Don't forget to add the required crontab entries for your user, pointing to the correct path.

    Recovering your backup from scratch

    When the time comes to recover your backup, depending on what you saved you will have the month's full backup plus differential backups up to the last time you could make them. The restoration process is very simple: it's the same one described in the first chapter (Simple DAR usage), only you have to do it first with the full backup, and then with the differential backups. This can be very boring, so here's another script you can save along with your backup files:

    #!/bin/bash
    
    if [ -n "$3" ]; then
       CMD="$1"
       INPUT="$2_data"
       FS_ROOT="$3"
       $CMD -x "$INPUT" -w -R "$FS_ROOT"
       for file in ${INPUT:0:8}*_diff*; do
          $CMD -x "${file:0:15}" -w -R "$FS_ROOT"
       done
       echo "All done."
    else
       echo "Not enough parameters.
    
    Usage: script dar_location base_full_backup directory
    
    Where dar_location is a path to a working dar binary, base_full_backup
    is a date in the format 'YYYY-MM-DD', and directory is the place where
    you want to put the restored data, usually '/' when run as root."
    fi
    

    This script is self-explanatory. The only thing you have to worry about is the -w switch, which tells DAR to overwrite the files it finds. This is required for differential backups. Oh, and put the script in the same directory as your backup files. Here's a usage example:

    ./recover.sh /usr/local/bin/dar 2003-10-01 /tmp/temp_path/
    

    Try running that as a normal user with some backup files. You can put the result in a temporary directory, so the nice thing is you don't need to wipe your hard disk to test it.

    Adding checks to the scripts

    Denis Corbin suggests that the scripts creating the backups could verify the exit status of the DAR command. For the purpose of these simple scripts this is not critical, because DAR itself will abort the operation with an error message, and cron will report any error output by email (something which doesn't happen if everything goes well).

    However, checking the exit status can be useful if you are testing the scripts interactively and want to know which commands are being run:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    if /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
          -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
          -P mnt -P dev/pts -P proc -P floppy -P burner \
          -P cdrom -P var/backups > /dev/null ; then
       if /usr/local/bin/dar -t $FILE > /dev/null ; then
          echo "Archive created and successfully tested."
       else
          echo "Archive created but test FAILED."
       fi
    else
       echo "Archive creation FAILED."
    fi
    /usr/bin/find $DIR -type f -exec chown .gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    You can test this version easily by running the script and killing the DAR process from another terminal or console with killall dar. This forces the termination of the DAR process, and you will see that one of the error branches is reached in the script.

    Another possible use of checking the command's exit status would be to remove incomplete archives from the hard disk when something fails, to run additional external commands when something fails, or to avoid testing the created archive when you know the first command failed. The latter can be done easily by chaining the creation and testing commands with && on a single line. This tells the shell to run both commands as a sequence, avoiding the second one if the first fails.
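
    The short-circuit behaviour of && can be checked with any pair of commands; here the dar invocations are stood in for by true and false:

```shell
#!/bin/sh
# The command after && only runs when the one before it succeeds:
true && echo "ran after success"

# Here the first echo is skipped, because false returns a non-zero
# status; the || branch fires instead:
false && echo "never printed" || echo "skipped after failure"

# The dar commands would be chained the same way, e.g.:
#   dar -c $FILE ... > /dev/null && dar -t $FILE > /dev/null
```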

    However, if the power fails during a backup, this version of the script would still leave half-written, invalid archives around. To prevent this you could improve the script to do positive verification. This means creating the backup file in a temporary directory, along with a *.valid file if the appropriate branch of the script is reached successfully.

    Following this strategy, another cron script watching the directory where the temporary backup files are created would move to the final directory those archives with a matching *.valid file, deleting all the others whose last modification time is older than one hour.
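
    A minimal sketch of such a watcher (the directory and file names are hypothetical, the sample files are created by the script itself for illustration, and the one-hour deletion step is left out):

```shell
#!/bin/sh
# Hypothetical staging/final directories; real use would point FINAL at
# something like /var/backups/system and run this from cron.
STAGING=./backup_staging
FINAL=./backup_final
mkdir -p "$STAGING" "$FINAL"

# Simulate one verified backup and one half-written (no marker) archive:
touch "$STAGING/2003-10-01_data.1.dar" "$STAGING/2003-10-01_data.valid"
touch "$STAGING/2003-10-02_diff.1.dar"   # crashed before its .valid appeared

# Publish the archives that have a matching *.valid marker; the others
# stay behind (a real script would also delete those older than one hour).
for marker in "$STAGING"/*.valid; do
    [ -e "$marker" ] || continue         # no markers: the glob didn't expand
    base=${marker%.valid}
    mv "$base".*.dar "$FINAL"/
    rm -f "$marker"
    echo "published: $(basename "$base")"
done
```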

    Ideas for the future

    I'm not going to implement these any time soon, because I'm very lazy, but if you are one of those hyperactive hackers, here are some things that would be nice to have:

    • Unify both the main and differential scripts into a single one, so that if the script runs and there is no main backup file for the current month, it is created, and otherwise a differential one is made. Useful if your machine happens to be switched off on the day of the month when the non-differential backup is made.

    • Improve the scripts to generate a daily CD-ROM image with cdrecord and burn it automatically to a rewritable disc placed in your machine. So if your whole hard disk gets trashed, you still have the latest backup on another storage medium. Of course, this is limited and cannot be automatic if your backup needs more than one CD-ROM. Do the same for ZIP/JAZ/whatever.

    • Integrate the generated backups with a bootable mini Knoppix distribution. Or any other floppy-based distribution which can boot from CD-ROM. That way you would have a rescue CD-ROM with the tools to format your hard disk, and right next to it a fresh backup with which to restore your machine to a working state.

    • Synchronization of the backup directories with remote machines over the Internet. So even if your machine ends up physically burnt along with your house, you still have safe backups somewhere else. Could be done easily with programs like rsync running over ssh as a cron job.

    • Extract common switches into a separate file and include it in your scripts using DAR's -B switch. For example:

      $ cat > /var/backups/system/common.dcf
      -m 256 -y -s 600M -D -R / -Z "*.gz" -Z "*.bz2" -Z "*.zip" \
      -Z "*.png" -P home/gradha -P tmp -P mnt -P dev/pts \
      -P proc -P floppy -P burner -P cdrom -P var/backups
      

      Later you can use this in the script:

      DIR=/var/backups/system
      FILE=${DIR}/`/bin/date -I`_data
      # Commands
      /usr/local/bin/dar -B ${DIR}/common.dcf -c $FILE > /dev/null
      /usr/local/bin/dar -t $FILE > /dev/null
      /usr/bin/find $DIR -type f -exec chown .gradha \{\} \;
      

      Which you can also reuse in the differential version!

    In fact, some smart people have already started making scripts like these for themselves and are not afraid of sharing them. To avoid bloating this mini-howto, I'm going to keep them as-is on my web page: https://github.com/gradha/dar-differential-backup-mini-howto/tree/master/contrib.

    Feel free to send me your own improvements and I'll add them to the directory. Whether it's a single file or a .tar.gz with a whole backup suite, please add a simple .txt file which I'll put next to it. Please use English in your description, and don't forget to include your name and email address so people can send you fixes or improvements!

    The end

    And that's all the magic. If you have problems, something is unclear or wrong (which is worse), send me an email. If you find this document useful and want to translate it, send me a translation of the file source.en.txt so I can distribute it along with this version and other users can easily find your translated version. Speaking of localization, you should be able to get the source code of this document from my personal page (link at the beginning of the document).
    Enjoy!
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs install-dist_pkgdataDATA: $(dist_pkgdata_DATA) @$(NORMAL_INSTALL) @list='$(dist_pkgdata_DATA)'; test -n "$(pkgdatadir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(pkgdatadir)'"; \ $(MKDIR_P) "$(DESTDIR)$(pkgdatadir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(pkgdatadir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(pkgdatadir)" || exit $$?; \ done uninstall-dist_pkgdataDATA: @$(NORMAL_UNINSTALL) @list='$(dist_pkgdata_DATA)'; test -n "$(pkgdatadir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(pkgdatadir)'; $(am__uninstall_files_from_dir) tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | 
\ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(DATA) installdirs: for dir in "$(DESTDIR)$(pkgdatadir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dist_pkgdataDATA @$(NORMAL_INSTALL) $(MAKE) $(AM_MAKEFLAGS) install-data-hook install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-dist_pkgdataDATA @$(NORMAL_INSTALL) $(MAKE) $(AM_MAKEFLAGS) uninstall-hook .MAKE: install-am install-data-am install-strip uninstall-am .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am \ install-data-hook install-dist_pkgdataDATA install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags-am uninstall uninstall-am uninstall-dist_pkgdataDATA \ uninstall-hook .PRECIOUS: Makefile install-data-hook: $(INSTALL) -d $(DESTDIR)$(pkgdatadir)/mini-howto for f in $(dist_pkgdata_DATA); do $(INSTALL) -m 0644 '$(srcdir)/'"$${f}" $(DESTDIR)$(pkgdatadir)/mini-howto; done uninstall-hook: rm -rf $(DESTDIR)$(pkgdatadir)/mini-howto # Tell versions [3.59,3.63) of GNU make 
to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: dar-2.7.17/doc/mini-howto/dar-differential-backup-mini-howto.it.html0000644000175000017520000013434214041360213022175 00000000000000 DAR differential backup mini-howto -IT-

    DAR differential backup mini-howto -IT-

    Author: Grzegorz Adam Hankiewicz
    Contact: dar@gradha.imap.cc
    Translator: David Gervasoni
    Contact: davidgerva@gmail.com
    Date: 2012-12-19
    Web site: http://gradha.github.com/dar-differential-backup-mini-howto/
    Copyright: This document has been placed in the public domain.
    Translations: From the web site you can get this document in English, Italian and Spanish.

    Introduction

    "Everyone should make backups of their important data". This ubiquitous piece of advice is usually ignored by most people. I ignored it too, until the day I lost a considerable amount of data. Not content with that, I managed to lose some more in a series of later accidents, until I decided I'd had enough. I then searched Freshmeat for backup programs with support for differential backups, and found DAR.

    Making a full (or base) backup means saving every file under the directories covered by your backup policy. A differential or incremental backup, instead, contains only the files whose contents have changed since the previous backup, whether that one was full or differential.

    DAR makes it easy to create a series of differential backups. The solution I developed runs automatic backups every night. On the first day of the month a full backup is made. For the rest of the month, only differential backups are made. In my case, few files change daily: the source code of the project I'm working on and, most often, my e-mail.

    This way, whenever I need to, I can easily recover the contents of my computer as they were on a specific day. DAR is a simple, no-frills command-line program, though it can get a little more complicated with a few switches. This little mini-howto will walk you through my specific configuration: very crude, but it works for me. I have already had to recover data from these backup copies. In fact, toward the end of 2003 I moved to another country, taking with me just a CD-ROM and a bootable Knoppix, and I recovered the exact state of my old Debian installation within a few hours. No tweaks, no reinstalling, and no lost files.

    This document was written using DAR version 1.3.0. When I moved to 2.0.3 everything kept working; I didn't even have to update my backups. So the interface and backup formats appear to be stable, or at least backward compatible. Still, don't take anything I say (write) here for granted. Verify first that the DAR version you have installed works as it should, so that in the future you will be able to recover files from your backups without problems.

    For this version of the text I used reStructuredText (hence the mysterious markup in the txt version). See http://docutils.sourceforge.net/ for more information.

    Basic DAR usage

    DAR is very similar to tar in the number of switches it has: there is one for every need, which makes it harder for new users to get started. As usual, you can get help on the available commands at any time by typing dar -h or man dar once you have installed it. As with tar, there is a set of mandatory switches defining the kind of operation you want to perform (create, extract, list, etc.) and a further set of switches modifying the chosen operation. Just as an example, imagine you want to back up one directory of your /home. You would write something like this:

    dar -c backup_file_without_extension -g file1 -g file2 ... -g fileN
    

    The output should look similar to the following:

    $ dar -c my_backup_file -g safecopy.py/ -g translate_chars.py/
    
    
     --------------------------------------------
     15 inode(s) saved
     with 0 hard link(s) recorded
     0 inode(s) not saved (no file change)
     0 inode(s) failed to save (filesystem error)
     4 file(s) ignored (excluded by filters)
     0 file(s) recorded as deleted from reference backup
     --------------------------------------------
     Total number of file considered: 19
    $ ls
    mailbox_date_trimmer/  my_backup_file.1.dar  sdb.py/
    mailbox_reader/        safecopy.py/          translate_chars.py/
    

    As you will have noticed, DAR adds a number and an extension to the file name. The purpose of the extension is clear: it helps to identify the file as a backup made with DAR. The number is called a slice, and it is related to DAR's ability to split a backup into files of a given size, so they can be stored on several media. If, for example, you wanted to keep your backups on CD-ROM, but the backups of your directories are larger than a CD-ROM's capacity, you can ask DAR to split the archive into as many files as needed, which you can then store on different units.

    Do you want to recover this backup? Just type the following commands:

    $ mkdir temp
    $ cd temp
    $ dar -x ../my_backup_file
    file ownership will not be restored as dar is not run as root.
    to avoid this message use -O option [return = OK | esc = cancel]
    Continuing...
    
    
     --------------------------------------------
     15 file(s) restored
     0 file(s) not restored (not saved in archive)
     0 file(s) ignored (excluded by filters)
     0 file(s) less recent than the one on filesystem
     0 file(s) failed to restore (filesystem error)
     0 file(s) deleted
     --------------------------------------------
     Total number of file considered: 15
    $ ls
    safecopy.py/  translate_chars.py/
    

    The backup policy

    The first step toward functional backups is to decide which parts of your system need to be archived. This doesn't mean you cannot simply back up your whole system, but splitting it into at least two parts will help DAR (or any other backup tool) a lot in its job.

    The system I run at home consists of two hard disks. The first hard disk is split into a 3.8 GB partition, where my whole system lives, and another 11 GB partition where I store all my music and other temporary files, such as some Debian packages I build myself. The second hard disk has a 9.4 GB partition whose only purpose is to serve as backup of the primary disk. I'm not interested in backing up the music files, because I have all the original CDs and a script to rip the tracks again and re-encode them to ogg.

    Of the 3.8 GB partition I want to back up, about 1.3 to 1.5 GB are usually free. I have "logically" split the 2.3 GB in use into system and home directories (as I write this, my home is 588 MB). The reason for this split is that, as a normal user, I can only modify the contents of my home directory plus a few files on the partition that I don't intend to back up. Meanwhile, the system part of the partition stays fairly stable and unchanged, because I rarely (un)install software. In fact, even within my home directory the only things that usually change are my Mail directory and projects, where I keep documents like this one and other software I write/modify.

    The basic distinction between home directories and system can also be useful in common setups. If you work for a university, often all the machines have the same base configuration, but each machine holds its own data. You can make a single system backup of one machine and one home backup per computer. Another common setup is a central server sharing home directories over NFS. That way you only have to back up the server. If there are users with high-level privileges, let them back up the system of their own machines; they can ignore the home backups, since the server will take care of those.

    Another thing to decide is how to configure DAR. You can use command-line switches or configuration files. Switches are fine when you don't have too many to specify. Configuration files are better when you want to make different, complex backups with many inclusions/exclusions; in addition, you can use comments to document the switches you chose, explaining for example why you include/exclude this or that directory. This can be useful if you come back to the machine after a long time and want to know the reason for each option.

    My setup runs DAR from a shell script called periodically by cron (Some scripts to automate the process), so I don't have to type long command lines each time. This short document will also briefly introduce you to writing such scripts. If you prefer configuration files, read the documentation shipped with DAR to learn the appropriate syntax.

    Making full backups with DAR

    Below is the whole command line I use, as root, to back up my system. Don't worry about the large number of switches; I will describe the purpose of each one in a moment:

    dar -m 256 -y -s 600M -D -R / -c `date -I`_data -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner -P cdrom
    
    • -m 256

      DAR can compress your backups. The compression is applied to each file individually, and can be useless for very small files. By default, files of 100 bytes or less are not compressed. With the -m switch I raise this limit to 256, which seems to work better for all those little configuration files lying under /etc/ and /home. As you can see, this is an entirely optional switch, almost a whim.

    • -y [level]

      This switch activates Bzip2 compression, which is off by default. You can also specify a compression level with a number from 0 (no compression, fast) to 9 (best compression, slow). By default Bzip2 uses level 6, which is the best speed/compression ratio for most files. I don't specify a compression level myself; 6 suits me fine.

    • -s 600M

      This is the DAR switch that lets you define the size of the backup files or, rather, of the slices. The specified size, 600 MB in this case, will be the maximum size of the created files. If your backup is bigger, you will end up with several backup files, each with a sequence number inserted just before the extension, so you can store each file on a different medium (floppies, zip, CD-ROM, etc.). My backups are much smaller than this size, and I keep the switch just for peace of mind, in case the files ever grow larger. If you think this switch may be useful to you, read dar's manual to learn more about it.

    • -D

      Stores the names and paths of the directories excluded with the -P switch, or not given on the command line. This switch is useful when you are recovering a backup from scratch: this way you don't have to create all the excluded directories by hand.

    • -R /

      Specifies the root directory from which to save, or into which to restore, the files covered by the backup. By default this is the current working directory (./). Since we are making a system backup, here that directory is /.

    • -c `date -I`_data

      This is the mandatory switch I mentioned earlier, and it requests the creation of a backup. For those who don't understand what follows, `date -I` is a shell trick. Briefly, date -I returns a date in YYYY-MM-DD format. The output of the command between backquotes is used as input for the -c switch. This way you can create backup files with their creation date in the name. If you still don't understand what I'm talking about, try the following line at your command prompt:

      echo "Today's date is `date -I`"
      
    • -Z file_pattern

      Using ordinary file-name patterns as argument, you can select which files you want stored in your backup without compression. This only makes sense if you also use the -y switch. Compressing already-compressed files yields, at best, bigger files, plus wasted CPU time.

    • -P relative_path

      With this switch you tell DAR which directories you do not want in your backup. Here you could put /home, for example (I'm the only user of this machine; there are a few other accounts, but only for testing purposes), system directories that aren't real files, like proc, other drives you may have mounted under mnt (including, obviously, the drive where you put the backup files), etc. Note that the paths you specify must be relative to the one given with the -R switch.

    That wasn't so hard. Check DAR's man page for more information about the switches you want to use. And here is the command I use inside my home directory:

    dar -m 256 -y -s 600M -D -R /home/gradha -c `date -I`_data \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer
    

    Nothing new under the sun. As you can see, most of the command is identical to the one above; I only changed the names of the directories I want to exclude with the -P switch, and the root directory with the -R switch.

    Making differential backups with DAR

    Once you have created a full backup, you can create differential ones. The first differential backup has to be made using the full backup as reference. The following differential backups use the latest available differential backup as reference. Here is the command for a differential backup of the system:

    dar -m 256 -y -s 600M -D -R / -c `date -I`_diff -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner -P cdrom \
       -A previous_backup
    
    • -c `date -I`_diff

      I only changed the file name, for a... "practical" reason.

    • -A previous_backup

      This new switch is used to tell DAR where to find the previous backup, so it can create a differential backup instead of a full one. The only thing to watch out for is that you must specify neither the slice number nor the extension, otherwise DAR will stop and prompt you on the command line.

    The user's command line is exactly the same. Here it is for completeness:

    dar -m 256 -y -s 600M -D -R /home/gradha -c `date -I`_diff \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer -A previous_backup
    

    DAR has another interesting feature we aren't using here: catalogues. When you create a backup with DAR, it contains both the data and a catalogue. This catalogue holds information about the files saved: their dates, their compressed size, etc. You can extract the catalogue and store it separately. Why would you want to do that? To set up differential backups over a network, for example.

    In order to create a differential backup, you have to provide DAR with the previous backup so the program can decide which files have changed and which haven't. Doing this over a network can eat a lot of bandwidth. Instead, after creating a backup, you can extract the catalogue and send it to the machine in charge of creating the backups. The next time, you can use this file with the -A switch, and DAR will behave as if the full backup file were there.

    This can also be useful if you use slices, because the catalogue is created from the first and last slices. It is easier to hand a single file to the command than to feed it all the disks of your previous backup.
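
    Although this setup doesn't use catalogues, isolating one is a single command. A sketch, assuming dar is installed and that 2003-10-01_data is the base name of an existing archive (both archive names here are illustrative):

    ```shell
    # Isolate the catalogue of an existing archive into a small separate file.
    dar -C 2003-10-01_catalogue -A 2003-10-01_data

    # Later, the isolated catalogue can serve as the -A reference for a
    # differential backup, with no need for the full archive to be at hand:
    dar -c `date -I`_diff -R / -A 2003-10-01_catalogue
    ```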

    Some scripts to automate the process

    As I said before, it's time to put our backup procedure under cron. Put the following executable script for the system backup under /root/dar_backup.sh:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
       -P mnt -P dev/pts -P proc -P floppy -P burner \
       -P cdrom -P var/backups > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chown .gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    A few things to note:

    • DIR is the variable holding the destination directory.
    • FILE holds the path of today's backup file.
    • I use absolute paths in the commands because my root account doesn't have all of them in its default environment. This is potentially a security risk. Ideally you would build DAR as root and keep the binaries where you built them, so nobody can touch them, and run Tripwire on them.
    • DAR prints statistics after each run. We don't want them under cron, because they would only generate useless mail. stdout is redirected to /dev/null. Errors will still be reported by mail should anything go wrong.
    • The last two find commands are optional. I use them to change the permissions of the files for a normal user, who will later back them up again. Another security risk: root should back up root's files and users their own. But on a single-user system that doesn't matter much. If a hypothetical intruder is able to get through my firewall, crack my password and then read all my backups: I'm toast.

    Now put the following script for the differential backups, nearly identical to the previous one, under /root/dar_diff.sh:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_diff
    PREV=`/bin/ls $DIR/*.dar|/usr/bin/tail -n 1`
    /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
       -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp -P mnt \
       -P dev/pts -P proc -P floppy -P burner -P cdrom \
       -P var/backups -A ${PREV%%.*} > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chown .gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    The only two changes are the addition of the -A switch and the generation of the PREV variable with a slightly complicated command line. Let's see what that line does:

    • First of all, the ls command builds a list of the files with the .dar extension in the backup directory; this output is piped to the next command.
    • By default ls lists files alphabetically. tail is used to get the last file, the -n 1 switch telling it to print only the last line.
    • DAR needs to work with file names without the slice number and extension. If we don't strip the file name ourselves, DAR will stop and ask the user whether to do so automatically. So we strip the file name with a Bash feature called parameter expansion. There are several possible expansions; type man bash to see them all. With %% we remove the longest trailing part matching what comes after the %%. The result is the base name we want to pass to DAR.
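
    The whole PREV derivation can be tried outside the script. A minimal sketch using throw-away archive names made up for the demonstration:

    ```shell
    # Simulated backup directory; the archive names are made up for the demo.
    DIR=$(mktemp -d)
    cd "$DIR"
    touch 2003-10-01_data.1.dar 2003-10-02_diff.1.dar

    # ls sorts alphabetically, so with ISO dates the last line is the newest archive.
    PREV=$(ls *.dar | tail -n 1)

    # %%.* removes everything from the first dot on: the slice number and the
    # extension (note this assumes the rest of the path contains no dots).
    echo "${PREV%%.*}"   # 2003-10-02_diff

    cd / && rm -r "$DIR"
    ```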

    Now we only have to put these two scripts under cron control. This is what we have to type after running crontab -e:

    15 0 2-31 * * ./dar_diff.sh
    15 0 1    * * ./dar_backup.sh
    

    Check man -S 5 crontab for the syntax of the command. In short, these two lines tell cron to run the jobs 15 minutes past midnight. dar_backup.sh will be run only on the first day of the month. The other script will be run on all the other days.

    Here are the backup scripts for your users. They are identical; only some DAR switches and the paths change:

    #!/bin/bash
    # dar_backup.sh
    
    DIR=/var/backups/gradha
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    /usr/local/bin/dar -m 256 -y -s 600M -D -R /home/gradha -c $FILE \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chmod 400 \{\} \;
    
    #!/bin/bash
    # dar_diff.sh
    
    DIR=/var/backups/gradha
    FILE=${DIR}/`/bin/date -I`_diff
    PREV=`/bin/ls $DIR/*.dar|/usr/bin/tail -n 1`
    /usr/local/bin/dar -m 256 -y -s 600M -D -R /home/gradha -c $FILE \
       -Z "*.gz" -Z "*.bz2" -Z "*.zip" -Z "*.png" \
       -P instalacion_manual -P Mail/mail_pa_leer \
       -A ${PREV%%.*} > /dev/null
    /usr/local/bin/dar -t $FILE > /dev/null
    /usr/bin/find $DIR -type f -exec chmod 400 \{\} \;
    

    Don't forget to add the required crontab entries for your users.

    Recovering backups on bare machines

    When the time comes to recover your backups, depending on what you saved, you will have the full backup of the month plus as many differential backups as you made. The recovery process is very simple: it is the same one described in the first section (Basic DAR usage), the important point being to recover the full backup first, and only then the differential ones. This can be tedious, so here is another script you can keep with your backup files:

    #!/bin/bash
    
    if [ -n "$3" ]; then
       CMD="$1"
       INPUT="$2_data"
       FS_ROOT="$3"
       $CMD -x "$INPUT" -w -R "$FS_ROOT"
       for file in ${INPUT:0:8}*_diff*; do
          $CMD -x "${file:0:15}" -w -R "$FS_ROOT"
       done
       echo "All done."
    else
       echo "Not enough parameters.
    
    Usage: script dar_location base_full_backup directory

    Where dar_location is the path to the dar binary,
    base_full_backup is a date in 'YYYY-MM-DD' format, and directory is
    the place where you want to put the recovered files, usually '/'
    when run as root."
    fi
    

    The script is self-explanatory. The only thing you should pay attention to is the -w option, which tells DAR to overwrite the files it finds. This is mandatory for differential backups. Remember to put the script in the same directory where you keep your backup files. Here is an example run:

    ./recover.sh /usr/local/bin/dar 2003-10-01 /tmp/temp_path/
    

    Try this as a normal user with a few backup files. You can put the recovered files in a temporary directory, so you don't have to wipe your hard disk just to test it.
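The recovery script above leans on bash substring expansion, `${var:offset:length}`, to turn slice file names back into the archive basenames dar expects. A small sketch with hypothetical names (note the expansion is a bashism, not POSIX sh):

```shell
# $2 would be "2003-10-01", so INPUT is the full-backup basename.
INPUT="2003-10-01_data"
# First 8 characters: "2003-10-", the glob prefix matching that
# month's differential archives.
echo "${INPUT:0:8}"

# A matched slice file; the first 15 characters drop the ".1.dar"
# slice suffix, giving the basename to pass to dar -x.
file="2003-10-05_diff.1.dar"
echo "${file:0:15}"
```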

    Adding checks to the backup script

    Denis Corbin suggests that the backup creation script also verify the exit status of the DAR commands. For a script as simple as this one, that is not critical, because DAR itself would print an error message and cron would report it by mail (something that normally does not happen when everything goes well).

    However, testing the exit status can be useful if you are verifying that the script works and want to know which commands were executed:

    #!/bin/bash
    
    DIR=/var/backups/system
    FILE=${DIR}/`/bin/date -I`_data
    # Commands
    if /usr/local/bin/dar -m 256 -y -s 600M -D -R / -c $FILE -Z "*.gz" \
          -Z "*.bz2" -Z "*.zip" -Z "*.png" -P home/gradha -P tmp \
          -P mnt -P dev/pts -P proc -P floppy -P burner \
          -P cdrom -P var/backups > /dev/null ; then
       if /usr/local/bin/dar -t $FILE > /dev/null ; then
          echo "Archive created and successfully tested."
       else
          echo "Archive created but test FAILED."
       fi
    else
       echo "Archive creating FAILED."
    fi
    /usr/bin/find $DIR -type f -exec chown .gradha \{\} \;
    /usr/bin/find $DIR -type f -exec chmod 440 \{\} \;
    

    You can easily test this version by starting the script and killing the DAR processes manually from another terminal or console with killall dar. That forces the DAR processes to terminate, and you will see that one of the failure branches of the backup script is reached.

    Another possible use of exit-status testing is to remove incomplete archives from the hard disk if something goes wrong, or to skip testing the created archive when you already know the first command failed. You can also easily chain the creation and test commands with && on a single line: this tells the shell to run both commands in sequence, and prevents the second from running if the first one failed.
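The && chaining can be sketched as follows. Since dar is assumed not to be available here, stand-in functions take the place of the real invocations; the behaviour of && is what matters:

```shell
# Stand-in for: /usr/local/bin/dar -c "$FILE" ... > /dev/null
create_backup() { true; }
# Stand-in for: /usr/local/bin/dar -t "$FILE" > /dev/null
verify_backup() { echo "tested"; }

# The verify step runs only when creation exits with status 0.
create_backup && verify_backup

# Simulate a failed creation: verify_backup is never executed.
create_backup() { false; }
create_backup && verify_backup || echo "creation FAILED, test skipped"
```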

    If you automate the backup procedure, this version of the script will leave broken archives lying around. To prevent that, you can have the script perform a positive verification: it creates the backup in a temporary directory together with a *.valid file.

    Another script then monitors the directory where the temporary files are placed, moves the files accompanied by *.valid into a definitive directory, and deletes those whose last modification is older than one hour.
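A hypothetical sketch of that mover, demonstrated in throw-away temporary directories (all names are made up): archives that earned a *.valid marker are promoted, and stale leftovers are pruned.

```shell
STAGE=$(mktemp -d)   # where the backup script writes archives + markers
FINAL=$(mktemp -d)   # definitive storage

# One verified backup and one that never got its marker.
touch "$STAGE/2003-10-01_data.1.dar" "$STAGE/2003-10-01_data.valid"
touch "$STAGE/broken.1.dar"

for marker in "$STAGE"/*.valid; do
   [ -e "$marker" ] || continue          # no marker present at all
   name=$(basename "$marker" .valid)
   mv "$STAGE/$name".* "$FINAL"/         # move the slices and the marker
done

# Prune files untouched for more than an hour (failed, abandoned runs).
find "$STAGE" -type f -mmin +60 -delete
ls "$FINAL"
```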

    Ideas for the future

    I don't plan to update this text soon, because I'm very lazy, but if you are one of those hyperactive hackers, here are a few things I would like to see added:

    • Merge the base and differential backup scripts into a single one, so that if no base backup exists for the current month when the script runs, it gets created. Useful for machines that stay switched off for a long time after the monthly backup has been made.

    • Update the script so that it creates a CD-ROM image daily with cdrecord and burns it automatically to a rewritable CD left in the drive. That way, should the whole hard disk die, the latest backup would be available on removable media. Of course this is limited and cannot be automatic once the backups take more space than a CD-ROM. The same goes for ZIP/JAZZ/whatever you like.

    • Integrate the generated backups with a bootable mini Knoppix or any other distribution that can boot from CD-ROM. You would then have a data-recovery CD-ROM that can boot automatically and format your hard disk.

    • Synchronize the backup directories with remote hosts over the Internet. That way, even if the whole machine burns down physically, your house along with it, you still have your backups somewhere else. It could easily be done with programs like rsync over ssh, run from cron.

    • Put the common parameters in a separate file and include it from the script using DAR's -B option. For example:

      $ cat > /var/backups/system/common.dcf
      -m 256 -y -s 600M -D -R / -Z "*.gz" -Z "*.bz2" -Z "*.zip" \
      -Z "*.png" -P home/gradha -P tmp -P mnt -P dev/pts \
      -P proc -P floppy -P burner -P cdrom -P var/backups
      

      You can then use it in the script like this:

      DIR=/var/backups/system
      FILE=${DIR}/`/bin/date -I`_data
      # Commands
      /usr/local/bin/dar -B ${DIR}/common.dcf -c $FILE > /dev/null
      /usr/local/bin/dar -t $FILE > /dev/null
      /usr/bin/find $DIR -type f -exec chown .gradha \{\} \;
      

      And it can be reused in the differential version too!
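The first idea above, a single script that decides between a full and a differential backup, can be sketched like this. It is shown against an empty temporary directory standing in for the backup directory (so it decides on a full backup), and the dar commands themselves are only indicated as comments:

```shell
DIR=$(mktemp -d)        # stand-in for /var/backups/gradha
MONTH=$(date +%Y-%m)

# Does a full backup already exist for the current month?
if ls "$DIR/$MONTH"-*_data.*.dar >/dev/null 2>&1; then
   MODE=diff   # yes: run dar with -A pointing at it
else
   MODE=data   # no: run dar without -A to create the monthly full backup
fi
echo "would create: $DIR/$(date -I)_$MODE"
```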

    In fact, some people have already started creating scripts for their own use and have no problem sharing them. To avoid cluttering this mini-howto, I intend to archive them as-is in my web space: https://github.com/gradha/dar-differential-backup-mini-howto/tree/master/contrib.

    Feel free to send in your own work and updates and I will add them to the directory. Whether you send a single script file or a .tar.gz with a whole backup suite, please include a short descriptive .txt file that I will put next to the other files, so people can read what the files are and what they do before downloading them. Use English in your description, and don't forget to include your name and e-mail so people can send you bugfixes or improvements.

    The end

    And that's the whole magic. If you have any problem, or something is unclear or (worse) wrong, send me an e-mail. If you find this document useful and want to translate it, send me a translation of the file source.en.txt so I can distribute it along with this version, and users will find it easier to read in their own language. You should easily find the source of this document on my home page (link at the beginning of the document).
    Enjoy!

    To conclude

    A slightly revised version, in somewhat more fluent Italian: that was the goal I set myself for this revision. I don't know whether I reached it, but I never imagined that translating from English would be so hard. Terms you don't know how to render in Italian, plurals of English nouns that stay singular in Italian, idioms that, once translated literally, you can't get out of your head. I hope, in any case, that you will find this revision easier to follow, and I remain, as always, at your disposal. David (link at the beginning of the document)
    dar-2.7.17/doc/mini-howto/index.html0000644000175000017520000000216614403564520014145 00000000000000 Dar - Mini-Howto
    DAR's Documentation

    Mini Howto

    This Mini-Howto has been written by Grzegorz Adam Hankiewicz. For convenience you can find below a local copy of his work, but you can also fetch an updated version from the official site.

    This Mini-Howto has been translated into several languages:

    dar-2.7.17/doc/mini-howto/README0000644000175000017520000000046014041360213013012 00000000000000 Mini Howto What you will find here is a mini howto compiled by Grzegorz Adam Hankiewicz. David Gervasoni has made an Italian translation. An up-to-date version of this howto can be found on the web site, as well as PDF-formatted versions. http://gradha.github.com/dar-differential-backup-mini-howto/ dar-2.7.17/doc/mini-howto/Makefile.am0000644000175000017520000000062514403564520014202 00000000000000dist_pkgdata_DATA = dar-differential-backup-mini-howto.en.html dar-differential-backup-mini-howto.it.html dar-differential-backup-mini-howto.es.html index.html install-data-hook: $(INSTALL) -d $(DESTDIR)$(pkgdatadir)/mini-howto for f in $(dist_pkgdata_DATA); do $(INSTALL) -m 0644 '$(srcdir)/'"$${f}" $(DESTDIR)$(pkgdatadir)/mini-howto; done uninstall-hook: rm -rf $(DESTDIR)$(pkgdatadir)/mini-howto dar-2.7.17/doc/style.css0000644000175000017520000001630214403564520011725 00000000000000/* general pages style */ * { box-sizing: border-box; } html { font-family: 'Open Sans', serif; text-align: justify; } body { margin: 5%; background-color: White; color: rgb(0, 0, 170); } p, dt { padding-left: 20px; } /* pagination */ div.top { border-radius: 14px 1px 14px 1px; background: rgb(221,221,221); color: rgb(0,0,170); text-align: center; padding: 10px; overflow: auto; display: flex; align-items: center; } div.top img { float:left; } @media only screen and (max-width: 800px) { div.top img { width: 150px; } } div.top h1, div.top div.h1 { flex-grow: 2; } /* menuitem */ div.menuitem { border-radius: 14px 1px 14px 1px; background: lightBlue; padding-left: 10px; padding-right: 10px; margin-bottom: 3px; text-overflow: ellipsis; overflow: hidden; } div.menuitem code { background: inherit; } div.menuitemactive { border-radius: 14px 1px 14px 1px; background: LightYellow; padding-left: 10px; padding-right: 10px; margin-bottom: 3px; text-overflow: ellipsis; overflow: hidden; } /* non floating menu
box */ div.menutop { clear: both; } /* floating menu box */ div.page { display: flex; } /* page contains menuleft and main */ div.menuleft { padding: 1%; text-align: center; margin-right: 20px; background: rgb(221,221,221); flex-basis: 25%; flex-grow: 1; } div.menuleft a { font-style: normal; text-decoration: none; font-size: 1.25em; font-weight: bold; color: #000055; } div.menuleft div:hover { background: White; text-shadow: 3px 3px 2px Grey; } div.menuleft a:hover { text-shadow: 3px 3px 2px Grey; } /* main section */ div.mainleft { padding-bottom: 5%; max-width: 75%; } div.maintop { padding-bottom: 5%; min-width: 100%; } @media only screen and (max-width: 600px) { div.top { border-radius: 14px 1px 1px 1px; } div.page { flex-direction: column; align-items: stretch; } div.menuleft { margin-right: 0; } div.mainleft { width: 100%; max-width: 100%; } } /* bottom jump to top page */ div.jump { padding: 5px; text-align: center; margin-right: 5px; margin-bottom: 5px; background: inherit; float: left; clear: both; width: 88%; position: fixed; bottom: 0px; border-style: solid; border-width: 2px; border-color: Grey; box-shadow: 5px 5px grey; opacity: 0.1; } div.jump:hover { opacity: 1; } /* displaying code */ code { white-space: pre; color: #111111; background-color:#EEEEEE; overflow: auto; } code.block { margin-left: 2%; display: block; border: 1px solid #333333; text-align: left; } code:hover { background-color:inherit; } /* displaying gauge */ div.cadre { border-style: solid; border-color: #AAAAAA; border-width: 1px; border-radius: 14px 1px 14px 1px; background-color: #EEEEEE; padding-top: 2px; padding-right: 5px; padding-bottom:2px; padding-left: 2px; box-sizing: border-box; } div.gauge { border-radius: 14px 1px 14px 1px; box-shadow: 5px 5px Grey; color: #CCCCCC; text-shadow: 2px 2px 0px Black; padding-left: 1%; box-sizing: border-box; margin-top: 1px; margin-bottom: 2px; margin-left: 1px; margin-right: 1px; white-space: pre; } .best { background-color: DarkGreen; } 
.normal { background-color: Blue; } .ref { background-color: #555555; } /* code emphasis */ e { background-color: Yellow; } .yellow { background-color: Yellow; } .red { background-color: Red; } .blue { background-color: LightBlue; } .green { background-color: Teal; } /* hyperlinks appearance */ a { font-style: italic } a:link { color: #006677; } a:visited { color: #000055; } a:hover { color: Purple; text-shadow: 0px 0px 4px Red; } a:active { color: #ff0000; } /* title appearance */ h1,h2,h3,h4,h5,h6,h7,h8 { text-shadow: 3px 3px 2px Grey; color: DarkBlue; } /* numbering the titles */ body { counter-reset: h2c; } h2::before { counter-increment: h2c; content: counter(h2c) "\0000a0-\0000a0\0000a0"; } h2 { counter-reset: h3c; text-decoration: underline; } h3::before { counter-increment: h3c; content: counter(h2c) "." counter(h3c) "\0000a0-\0000a0\0000a0"; } h3 { counter-reset: h4c; } h4::before { counter-increment: h4c; content: counter(h2c) "." counter(h3c) "." counter(h4c) "\0000a0-\0000a0\0000a0"; } /* table view */ div.table { overflow-x: auto; } table, td { border: 1px solid; border-collapse: collapse; } table { background-color: #EEEEEE; margin-left: auto; margin-right: auto; } th { background-color: DarkBlue; text-shadow: 0px 0px 2px LightBlue; color: White; } .left { text-align: left; } .right { text-align: right; } .center { text-align: center; } table.lean { border: 0px; background-color: inherit; max-width: 100%; } table.lean td, th, tr { border: 0px; } /* description lists */ dt { font-weight: bold; text-decoration: underline; } td.desc { vertical-align: top; text-align: right; padding-right: 25px; } dt::before { content: '- '; } dt.void::before { content: ''; } dt.void::before { content: ' '; } /* synthesis */ .optimum { background-color: Green; color: White; text-shadow: 2px 2px 0px Black; } .ideal { background-color: Green; } .limited { background-color: Yellow; } .noway { background-color: Red; } /* tooltip */ .tooltip { position: relative; /* display: 
inline-block;*/ } .tooltip .text { visibility: hidden; max-width: 120px; background-color: DarkBlue; color: white; text-align:center; border-radius: 5px; padding: 5px 0; position: absolute; z-index: 1; box-shadow: 5px 5px Grey; } .tooltip:hover .text { visibility: visible; } /* color codes for Dev Cycle */ table.release { border: none; border-collapse: separate; min-width: 60%; } table.release td { border: none; border-collapse: separate; text-align: center; vertical-align: middle; } table.release td.tier { width: 33%; } .dc { width:8.3%; height: 30px; } .dc-dev1 { background-color: rgb(255,255,255); color: rgb(0,0,0); } .dc-dev2 { background-color: rgb(255,204,204); color: rgb(0,0,0); } .dc-dev3 { background-color: rgb(255,102,102); color: rgb(0,0,0); } .dc-dev4 { background-color: rgb(255,0,0); color: rgb(0,0,0); } .dc-dev5 { background-color: rgb(204,0,0); color: rgb(0,0,0); } .dc-frozen { background-color: rgb(255,204,0); color: rgb(0,0,0); } .dc-pre-release { background-color: rgb(255,255,51); color: rgb(0,0,0); } .dc-rel1 { background-color: rgb(153,255,153); color: rgb(0,0,0); } .dc-rel2 { background-color: rgb(51,255,51); color: rgb(0,0,0); } .dc-rel3 { background-color: rgb(51,204,0); color: rgb(0,0,0); } .dc-ending-soon { background-color: rgb(51,153,153); color: rgb(0,0,0); } .dc-ended { background-color: rgb(151,153,153); color: rgb(0,0,0); } dar-2.7.17/doc/index_libdar.html0000644000175000017520000000421714403564520013367 00000000000000 Dar's Documentation
    DAR's Documentation

    Main page for dar's documentation

    Libdar, the library that powers the dar command-line tool as well as several other programs, has its own documentation. All the features available from dar are provided by libdar's API, which is fully documented, including samples and a tutorial.

    The API is available today for the C++ and Python3 languages.

    • The API tutorial for C++ programming
    • Dar command-line source code for API usage illustration
    • The libdar Python binding tutorial, based on examples of use. It follows the steps of the C++ API tutorial but uses the equivalent Python classes.
    • The API reference documentation can be referred to by both C++ and Python developers.
    • The libdar-api mailing-list if you want support on the API or Python binding usage.
    dar-2.7.17/doc/index_dar.html0000644000175000017520000000415614403564520012702 00000000000000 Dar's Documentation
    DAR's Documentation

    Main page for dar's documentation

    The following documentation focuses on the dar command-line tools:

    If, after having read the previous documents, you are still missing some information, you are welcome to ask for support

    Several external tools are based on libdar or dar and provide some or all of the features presented in the documentation referenced above. Please refer to their respective documentation.

    dar-2.7.17/doc/restoration_dependencies.txt0000644000175000017520000000124714403564520015675 00000000000000
    data        : restore before metadata (dates, ownership, permissions, EA, FSA)
    dates       : after data (atime/mtime); can be before ownership, permissions, EA and FSA, as these only change ctime, which cannot be restored anyway
    ownership   : after data; if not running as root but having the CHOWN capability, restoring ownership may remove write access to the file
    permissions : if the user is not root, restore as late as possible (may remove write access)
    ea          : after ownership, so that restoring ownership does not drop the Linux capabilities, but before permissions, to be able to restore them when libdar is not run as root
    fsa         : the immutable flag needs to be restored last, but the other FSA before permissions, to be able to restore them when libdar is not run as root
    dar-2.7.17/doc/Good_Backup_Practice.html0000644000175000017520000004301414403564520014730 00000000000000 Good Backup Practice Short Guide
    Dar Documentation

    Good Backup Practice Short Guide

    Presentation

    This short guide gathers important (and somewhat obvious) techniques about computer backups. It also explains the risks you take by not following these principles. I thought all this was obvious and well known to anyone, until recently, when I started getting feedback from people complaining about data lost to bad media or other causes. To the question "have you tested your archive?", I was surprised to get negative answers.

    This guide is not especially tied to Disk ARchive (aka dar) any more than to any other tool; thus, you can take advantage of reading this document if you are not sure of your backup procedure, whatever backup software you use.

    Notions

    In the following we will speak about backups and archives:

    • by backup, we mean a copy of some data that remains in place in an operational system
    • by archive, we mean a copy of data that is removed afterward from an operational system. It stays available but is no longer used frequently.

    With this meaning of an archive, you can also make a backup of an archive (for example, a clone copy of your archive).

    Archives

    1. The first thing to do just after making an archive is to test it on its definitive medium. There are several reasons why this testing is important:

      • any medium may have a surface error, which in some cases cannot be detected at writing time.
      • the software you use may have bugs (dar can too, yes. ;-) ... ).
      • you may have done a wrong operation or missed an error message (no space left to write the whole archive, and so on), especially when using poorly written scripts.

      Of course, the archive testing must be done once the backup has been put in its definitive place (CD-R, floppy, tape, etc.); if you have to move it (copy it to another medium), then you need to test it again on the new medium. The testing operation must read/test all the data, not just list the archive contents (-t option instead of -l option for dar). And of course the archive must have a minimum mechanism to detect errors (dar has one without compression, and two when using compression).

    2. Better than mere testing is to compare the files in the archive with the original files on disk (-d option for dar). This does the same as testing archive readability and coherence, while also checking that the data is really identical, whatever corruption detection mechanisms are used. This operation is not suited to a set of data that changes (like an active system backup), but it is probably what you need when creating an archive.

    3. To increase the degree of security further, the next thing to try is to restore the archive in a temporary place, or better, on another computer. This lets you check that, from end to end, you have a good usable backup on which you can rely. Once you have restored, you will need to compare the result; the diff command can help you here. Moreover, diff is a program with no link to dar, so it would be very improbable for dar and diff to share a common bug that makes you believe the original and restored data are identical while they are not!

    4. Unfortunately, most (all) media alter with time, and an archive that was properly written on a correct medium may become unreadable with time and/or bad environmental conditions. So take care not to store magnetic media near magnetic sources (like HiFi speakers) or enclosed in metallic boxes, and avoid exposing your CD-R(W), DVD-R(W), etc. to direct sunlight. Humidity is also mentioned for many media: respect the acceptable humidity range for each medium (don't store your data in your bathroom, kitchen, cellar, ...). The same goes for temperature. More generally, have a look at the safe environmental conditions described in the documentation, even just once for each media type.

      The problem with archives is that you usually need them for a long time, while the medium has a limited lifetime. A solution is to make one (or several) copies (i.e., backups of the archive) of the data when the original support has reached half its expected life.

      Another solution is to use Parchive. It works on the principle of RAID disk systems, creating beside each file a par file which can be used later to recover missing or corrupted parts of the original file. Of course, Parchive can work on dar's slices. But it requires more storage, so you will have to choose a smaller slice size to leave room for the Parchive data on your CD-R or DVD-R, for example. The amount of data generated by Parchive depends on the redundancy level (Parchive's -r option). Check the notes for more information about using Parchive with dar. When using a read-only medium, you will need to copy the corrupted file to a read-write medium so that Parchive can repair it. Unfortunately, the usual 'cp' command stops at the first I/O error, leaving you unable to get the sane data *after* the corruption. In most cases you would not have enough sane data for Parchive to repair your file. For that reason the "dar_cp" tool has been created (it is included in dar's package). It is a cp-like command that skips over corruptions (replacing them with fields of zeroed bytes, which can be repaired afterward by Parchive) and can copy the sane data located after the corrupted part.

    5. Another problem arises when an archive is read often. Depending on the medium, reading degrades the media little by little and shortens its lifetime. A possible solution is to have two copies: one for reading, and one to keep as a backup, which should never be read except to make a new copy. Chances are that the often-read copy will "die" before the backup copy; you could then make a new backup copy from the original backup copy, which in turn could become the new "often read" medium.

    6. Of course, if you want an often-read archive that you also want to keep forever, you can combine the two previous techniques, making two copies, one for reading and one for backup. Once a certain time has passed (the medium's half lifetime, for example), you can make a new copy and keep it beside the original backup copy, just in case.

    7. Another concern is the safety of your data. In some cases, the archive does not need to be kept a very long time, nor read often, but is instead very "precious". In that case, a solution could be to make several copies stored in very different locations. This can prevent data loss in case of fire or other disasters.

    8. Yet another aspect is the privacy of your data. An archive may not have to be accessible to everyone. Several directions are possible to address this problem:

      • Restricting physical access to the archive (storing it in a bank or a locked place, for example)
      • Hiding the archive (in your garden ;-) ) or hiding the data among other data (Edgar Poe's hidden-letter technique)
      • Encrypting your archive
      • And probably some other ways I am not aware of.

      For encryption, dar provides strong encryption inside the archive (blowfish, aes, etc.) and preserves the direct-access feature, which avoids having to decrypt the whole archive to restore just one file. But you can also use an external encryption mechanism, like GnuPG, to encrypt slice by slice, for example; the drawback is that you will have to decrypt each slice as a whole to recover a single file from it.

    Backup

    Backups act a bit like archives, except that they are a copy of a changing set of data which is, moreover, expected to stay in its original location (the system). As with an archive, it is good practice to at least test the resulting backups, and once a year if possible to test the overall backup process by restoring your system into a new virtual machine or a spare computer, checking that the recovered system is fully operational.

    The fact that the data is changing introduces two problems:

    • A backup is almost never up to date, and you will probably lose data if you have to rely on it
    • A backup soon becomes obsolete.

    A backup also plays the role of keeping a recent history of changes. For example, you may have deleted some precious data from your system, and it is quite possible that you notice this mistake long after the deletion. In that case, an old backup remains useful, in spite of many more recent backups.

    Consequently, backups need to be made often, to keep the delta small in case of a disk crash. But making a new backup does not mean the older ones can be removed. A usual way of handling this is to have a set of media over which you rotate the backups: the new backup is written over the oldest backup of the set. This way you keep a certain history of your system's changes. It is your choice how many backups you want to keep, and how often you make a backup of your system.
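The rotation idea can be sketched with plain files standing in for media (a temp-dir demo with hypothetical slot names): the slot written to least recently is the one the next backup overwrites.

```shell
POOL=$(mktemp -d)
# Four media slots with staggered ages; slot4 was written 4 days ago,
# so it is the oldest. (GNU touch -d syntax assumed.)
for i in 1 2 3 4; do
   touch -d "$i days ago" "$POOL/slot$i"
done

# ls -t sorts newest first, so the last entry is the oldest slot.
OLDEST=$(ls -t "$POOL" | tail -n 1)
echo "next backup overwrites: $OLDEST"
```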

    Differential / incremental backup

    A technique that can increase history depth while saving the media space required by each backup is the differential backup: a backup of only what has changed since a previous backup (the "backup of reference"). The drawback is that it is not autonomous and cannot be used alone to restore a full system. Thus, there is no problem with keeping a differential backup on the same medium as its backup of reference.
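A conceptual demo of "only what changed since the reference", run in a temporary directory: a timestamp file stands in for the backup of reference's date, and `find -newer` selects the files a differential backup would save. (dar does this internally by comparing against the reference archive's catalogue, not with find; this is only an illustration.)

```shell
WORK=$(mktemp -d)
touch "$WORK/old_file"       # present before the full backup
sleep 1
touch "$WORK/.reference"     # moment the full backup was taken
sleep 1
touch "$WORK/new_file"       # modified after the full backup

# Only new_file would be part of the differential backup.
find "$WORK" -type f -newer "$WORK/.reference" ! -name .reference
```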

    Making many consecutive differential backups (taking the last backup as the reference for the next one, which some call "incremental" backups) will reduce your storage requirements, but adds extra time cost at restoration in case of a computer accident: you will have to restore the full backup (of reference), then restore each of the many backups made after it, up to the last one. This implies that you must keep all the differential backups made since the backup of reference, if you wish to restore the exact state of the filesystem at the time of the last differential backup.
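With ISO dates in the archive names (hypothetical names below), the restore order of such a chain is simply the sorted list: the backup of reference first, then every differential in chronological order.

```shell
# A shuffled set of one full backup and its differentials.
set -- 2003-10-07_diff 2003-10-01_data 2003-10-03_diff 2003-10-02_diff

# Lexical sort of YYYY-MM-DD prefixes is chronological order,
# which is exactly the order the archives must be restored in.
ORDER=$(printf '%s\n' "$@" | sort | paste -sd' ' -)
echo "$ORDER"   # → 2003-10-01_data 2003-10-02_diff 2003-10-03_diff 2003-10-07_diff
```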

    It is thus up to you to decide how many differential backups you make, and how often you make a full backup. A common scheme is to make a full backup once a week and a differential backup each day of the week. The backups made in a week are kept together. You could then have ten sets of full+differential backups, and a new full backup would erase the oldest full backup as well as its associated differential backups. This way you keep a ten-week history of backups with a backup every day, but this is just an example.

    An interesting protection was suggested by George Foot on the dar-support mailing list: each time you make a new full backup, also make an additional differential backup based on the previous full backup (the one just older than the one you have just built). It would act as a substitute for the new full backup, in case something goes wrong with it later on.

    Decremental Backup

    Based on a feature request for dar made by "Yuraukar" on the dar-support mailing list, the decremental backup provides an interesting approach where the disk requirement is optimized as for the incremental backup, while the latest backup is always a full one (whereas in the incremental approach it is the oldest backup that is full). The drawback here is some extra work at each new backup creation, to transform the previously most recent backup from a full backup into a so-called "decremental" backup.

    A decremental backup contains only the difference between the current state of the system and the state the system had at an earlier date (the date of the full backup to which the decremental backup corresponds).

    In other words, decremental backups are built as follows:

    • Each time (each day for example), a new full backup is made
    • The full backup is tested, parity data is built if desired, and so on.
    • From the previous full backup and the new full backup, a decremental backup is made
    • The decremental backup is tested, parity data is built if desired, and so on.
    • The oldest full backup can then be removed

    This way you always have a full backup as the latest backup, and decremental backups as the older ones.

    You may still have several backup sets (one per week, for example, each containing at the end of the week a full backup and six decremental backups), but you may also keep just one set (a full backup and many decremental backups). When you need more space, you just delete the oldest decremental backups, something you cannot do with the incremental approach, where deleting the oldest backup means deleting the full backup that all the following incremental backups are based upon.
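The space-reclaiming property described above can be sketched with empty files standing in for archives (temp-dir demo, hypothetical names): in a decremental set, the oldest archives can simply be deleted, since nothing newer depends on them.

```shell
SET=$(mktemp -d)
# Three old decremental backups plus the latest full backup.
touch "$SET/2003-01-01_decr.1.dar" "$SET/2003-02-01_decr.1.dar" \
      "$SET/2003-03-01_decr.1.dar" "$SET/2003-04-01_full.1.dar"

# ISO-dated names sort chronologically, so head -n 1 is the oldest.
OLDEST=$(ls "$SET" | sort | head -n 1)
rm "$SET/$OLDEST"   # frees space without breaking any newer backup
ls "$SET"           # three files remain
```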

    Unlike the incremental backup approach, it is very easy to restore a whole system: just restore the latest backup (as opposed to restoring the most recent full backup and then as many incremental backups as follow it). If you need to recover a file that was erased by mistake, just use the adequate decremental backup. And it is still possible to restore the whole system globally to a state it had long before the latest backup was made: restore the full backup (the latest one), then each decremental backup in turn, back to the one corresponding to the epoch you wish. The probability that you will need all the decremental backups is small compared to the probability of needing all the incremental ones: you are effectively much more likely to restore a system to a recent state than to a very old one.

    There are, however, several drawbacks:

    time
    Making a full backup each time is time consuming, and creating a decremental backup from two full backups is even more time consuming...
    temporary disk space
    Each time you create a new backup, you temporarily need more space than with the incremental approach: you must keep two full backups for a short period, plus a decremental backup (usually much smaller than a full backup), even if at the end you remove the oldest full backup.

    In conclusion, I would not say that decremental backup is a panacea; however, it exists and may be of interest to some of you. More information about dar's implementation of decremental backups can be found here.


    Any other trick/idea/improvement/correction/evidence is welcome!

    Denis.

    dar-2.7.17/doc/man/0000755000175000017520000000000014767510034010710 500000000000000dar-2.7.17/doc/man/Makefile.in0000644000175000017520000004153414767510000012675 00000000000000# Makefile.in generated by automake 1.16.5 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2021 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = doc/man ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/gettext.m4 \ $(top_srcdir)/m4/host-cpu-c-abi.m4 $(top_srcdir)/m4/iconv.m4 \ $(top_srcdir)/m4/intlmacosx.m4 $(top_srcdir)/m4/lib-ld.m4 \ $(top_srcdir)/m4/lib-link.m4 $(top_srcdir)/m4/lib-prefix.m4 \ 
$(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/m4/nls.m4 \ $(top_srcdir)/m4/po.m4 $(top_srcdir)/m4/progtest.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(dist_pkgdata_DATA) \ $(am__DIST_COMMON) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! 
-d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(pkgdatadir)" DATA = $(dist_pkgdata_DATA) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) am__DIST_COMMON = $(srcdir)/Makefile.in DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = @CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CXXSTDFLAGS = @CXXSTDFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DOXYGEN_PROG = @DOXYGEN_PROG@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GETTEXT_MACRO_VERSION = @GETTEXT_MACRO_VERSION@ GMSGFMT = @GMSGFMT@ GMSGFMT_015 = @GMSGFMT_015@ GPGME_CFLAGS = @GPGME_CFLAGS@ GPGME_CONFIG = @GPGME_CONFIG@ GPGME_LIBS = @GPGME_LIBS@ GPGRT_CONFIG = @GPGRT_CONFIG@ GREP = @GREP@ HAS_DOT = @HAS_DOT@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ INTLLIBS = @INTLLIBS@ INTL_MACOSX_LIBS = @INTL_MACOSX_LIBS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL_CFLAGS = @LIBCURL_CFLAGS@ LIBCURL_LIBS = @LIBCURL_LIBS@ LIBICONV = @LIBICONV@ LIBINTL = @LIBINTL@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTHREADAR_CFLAGS = @LIBTHREADAR_CFLAGS@ LIBTHREADAR_LIBS = @LIBTHREADAR_LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBICONV = @LTLIBICONV@ LTLIBINTL = @LTLIBINTL@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = 
@MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MSGFMT = @MSGFMT@ MSGMERGE = @MSGMERGE@ MSGMERGE_FOR_MSGFMT_OPTION = @MSGMERGE_FOR_MSGFMT_OPTION@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ POSUB = @POSUB@ PYEXT = @PYEXT@ PYFLAGS = @PYFLAGS@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ UPX_PROG = @UPX_PROG@ USE_NLS = @USE_NLS@ VERSION = @VERSION@ XGETTEXT = @XGETTEXT@ XGETTEXT_015 = @XGETTEXT_015@ XGETTEXT_EXTRA_OPTIONS = @XGETTEXT_EXTRA_OPTIONS@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dot = @dot@ doxygen = @doxygen@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ groff = @groff@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ pkgconfigdir = @pkgconfigdir@ 
prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ tmp = @tmp@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ upx = @upx@ dist_pkgdata_DATA = index.html @USE_GROFF_TRUE@TARGET = copyman @USE_GROFF_TRUE@SUFFIXES = .html .1 all: all-am .SUFFIXES: .SUFFIXES: .html .1 $(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/man/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu doc/man/Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs install-dist_pkgdataDATA: $(dist_pkgdata_DATA) @$(NORMAL_INSTALL) @list='$(dist_pkgdata_DATA)'; test -n "$(pkgdatadir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(pkgdatadir)'"; \ $(MKDIR_P) "$(DESTDIR)$(pkgdatadir)" || exit 1; \ fi; \ for p in $$list; do \ if 
test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(pkgdatadir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(pkgdatadir)" || exit $$?; \ done uninstall-dist_pkgdataDATA: @$(NORMAL_UNINSTALL) @list='$(dist_pkgdata_DATA)'; test -n "$(pkgdatadir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(pkgdatadir)'; $(am__uninstall_files_from_dir) tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am @USE_GROFF_FALSE@all-local: all-am: Makefile $(DATA) all-local installdirs: for dir in "$(DESTDIR)$(pkgdatadir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
@USE_GROFF_FALSE@clean-local: @USE_GROFF_FALSE@install-data-hook: @USE_GROFF_FALSE@uninstall-hook: clean: clean-am clean-am: clean-generic clean-libtool clean-local mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dist_pkgdataDATA @$(NORMAL_INSTALL) $(MAKE) $(AM_MAKEFLAGS) install-data-hook install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-dist_pkgdataDATA @$(NORMAL_INSTALL) $(MAKE) $(AM_MAKEFLAGS) uninstall-hook .MAKE: install-am install-data-am install-strip uninstall-am .PHONY: all all-am all-local check check-am clean clean-generic \ clean-libtool clean-local cscopelist-am ctags-am distclean \ distclean-generic distclean-libtool distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-data-hook install-dist_pkgdataDATA \ install-dvi install-dvi-am install-exec install-exec-am \ install-html install-html-am install-info install-info-am \ install-man install-pdf install-pdf-am install-ps \ install-ps-am install-strip installcheck installcheck-am \ installdirs maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \ ps ps-am tags-am uninstall uninstall-am \ uninstall-dist_pkgdataDATA uninstall-hook .PRECIOUS: Makefile @USE_GROFF_TRUE@copyman: @USE_GROFF_TRUE@ cp '$(srcdir)/../../man/'*1 . 
@USE_GROFF_TRUE@ $(MAKE) dar.html dar_slave.html dar_xform.html dar_manager.html dar_cp.html dar_split.html @USE_GROFF_TRUE@ touch copyman @USE_GROFF_TRUE@.1.html: @USE_GROFF_TRUE@ sed -e 's%\-%\\-%g' < ./$< | groff -man -Thtml | sed -e 's% $@ @USE_GROFF_TRUE@all-local: $(TARGET) @USE_GROFF_TRUE@clean-local: @USE_GROFF_TRUE@ rm -f $(TARGET) *.1 copyman dar.html dar_slave.html dar_xform.html dar_manager.html dar_cp.html dar_split.html @USE_GROFF_TRUE@install-data-hook: @USE_GROFF_TRUE@ $(INSTALL) -d $(DESTDIR)$(pkgdatadir)/man @USE_GROFF_TRUE@ $(INSTALL) -m 0644 '$(srcdir)'/*.html $(DESTDIR)$(pkgdatadir)/man @USE_GROFF_TRUE@ $(INSTALL) -m 0644 '$(srcdir)/index.html' $(DESTDIR)$(pkgdatadir)/man @USE_GROFF_TRUE@uninstall-hook: @USE_GROFF_TRUE@ rm -rf $(DESTDIR)$(pkgdatadir)/man # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: dar-2.7.17/doc/man/index.html0000644000175000017520000000325414403564520012625 00000000000000 Dar - Dynamically Generated Documentation
    DAR's Documentation

    Dynamically Generated Documentation

    The following documentation has not been built manually. If you cannot access the following links while reading this page from a source package, it is probably because you have not typed 'make' or because you lack some requirements like Groff or Doxygen. You can also access the Dar documentation on dar's homepage or mirror page, with all dynamically generated documentation available.

    Man pages for:

    Libdar's API Documentation:

    dar-2.7.17/doc/man/Makefile.am0000644000175000017520000000137214403564520012663 00000000000000dist_pkgdata_DATA = index.html if USE_GROFF TARGET=copyman SUFFIXES = .html .1 copyman: cp '$(srcdir)/../../man/'*1 . $(MAKE) dar.html dar_slave.html dar_xform.html dar_manager.html dar_cp.html dar_split.html touch copyman .1.html: sed -e 's%\-%\\-%g' < ./$< | groff -man -Thtml | sed -e 's% $@ all-local: $(TARGET) clean-local: rm -f $(TARGET) *.1 copyman dar.html dar_slave.html dar_xform.html dar_manager.html dar_cp.html dar_split.html install-data-hook: $(INSTALL) -d $(DESTDIR)$(pkgdatadir)/man $(INSTALL) -m 0644 '$(srcdir)'/*.html $(DESTDIR)$(pkgdatadir)/man $(INSTALL) -m 0644 '$(srcdir)/index.html' $(DESTDIR)$(pkgdatadir)/man uninstall-hook: rm -rf $(DESTDIR)$(pkgdatadir)/man endif dar-2.7.17/doc/portable_cp0000755000175000017520000000123214740171677012302 00000000000000#!/bin/sh if [ -z "$1" -o -z "$2" -o ! -z "$3" ] ; then echo "usage: $0 " exit 1 fi TMP_SRC=dar_install_cp_test_src TMP_DST=dar_install_cp_test_dst if mkdir "$TMP_DST" && mkdir "$TMP_SRC" ; then if cp -dR --preserve=mode $TMP_SRC $TMP_DST 1> /dev/null 2> /dev/null ; then rm -rf "$TMP_SRC" "$TMP_DST" exec cp -dR --preserve=mode "$1" "$2" else # BSD-like Unix that does not support -d or --preserve options rm -rf "$TMP_SRC" "$TMP_DST" exec cp -pR "$1" "$2" fi else rm -rf "$TMP_SRC" "$TMP_DST" echo "Impossible to create $TMP_DST or $TMP_SRC in order to determine capabilities of the 'cp' command" exit 2 fi dar-2.7.17/doc/index.html0000644000175000017520000000313514403564520012050 00000000000000 Dar's Documentation
    DAR's Documentation

    Main page for dar's documentation

    There are many ways to discover dar and libdar. The following links offer different possible paths, to suit anyone's expectations:

    dar-2.7.17/doc/index_internal.html0000644000175000017520000000307014740173721013745 00000000000000 Dar's Documentation
    DAR's Documentation

    Main page for dar's documentation

    This documentation gives information about libdar internals. If you want to know more about the way dar is implemented, you should find here everything you need:

    If you have questions whose answers you could not find in this document, or if you have suggestions, you are welcome to post them on the dar-support mailing-list.

    dar-2.7.17/doc/old_dar_key2.txt0000644000175000017520000000627714740171677013177 00000000000000-----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1.4.10 (GNU/Linux) mQINBFBRzaMBEADLGrs4IyWuwqlgvp+OyzoMMlGk2r+PHjZC74vB8CoCgx1pybX8 U4OH7+9xZJVuOJTBOvi4xFtfDLZQlsXYFUTvsUI7nUuJfmzk5OkN/GdRkIIIagXL orUMDXsI7M89hrPxMQTZLC8im0PRF3VBNibcvIA4XIFM+L37jQpfZsztshrL6QHu GISxQHoFV48ojl+K/hb5qDDq6mzb2H7TUHoQX4RStsNO0l7X++RwZ4C3feAzGGCN rM7Tm8+PgapAzycyKZ4lx/PlzyJKWS/Q+Cu6aLuqHcO7TbHRfibvBBIfyhdAlZQx ALEcFflc8BZpDVsIc6f5DQDj28zgGGY44Mz8ZgfOYrjW9fm2aSPcp/im3nBlqsal HP5FFbJhJpDfZMsnBPdaMMtIzR6R5iJAswYkraHRrkkhGX9OkV85dRkAset5e0V/ mtzQWdDNjoIK42vb0tDMUusjCOQODKLC4l8Xn7VGZLRAm6X+s5qug4fZy/YQEL3m SlRzeoTwM/Ri1SeLLx2WxtylXczJ1qEwHV7lQIZYg2iAvyP1bkO0rpNXEAUTGvCl LM8vXXMDXl4LtN6H9O/tE/jOtY/w3u8HVPnkX/2NflqkeQGM6gqsmOGBuoPvaeWY SPQIoe4vOGlkcnzNxEr24i7VKb4QrMM7pdvX8WPSgoBweU2SHP7HCqDPGQARAQAB tDxEZW5pcyBDb3JiaW4gKGh0dHA6Ly9kYXIubGludXguZnJlZS5mci8pIDxkYXIu bGludXhAZnJlZS5mcj6JAj4EEwECACgFAlBRzaMCGwMFCRH1qD0GCwkIBwMCBhUI AgkKCwQWAgMBAh4BAheAAAoJEAgxsL0D2LGC4ZEP/2+9kmhGSNLXsweDyJqhH4mY F2PsqoVw2fFbUyOCDtF69mGT5fy47OJ1U9e1TRpiMg6ojlrb1n0WaHtFT4byLQPa dbKLO22XiEv4EuWS5rPvZQNXW4hkcgA75ECnpJSChWRoHKHdx6HylP1X/3+lbVJ7 AFVXG39CUiaqriRcYgGJ28TE7zfxqaGZAsY0EPOBpj+7qXqqAS3tBDiFL7CJSnhN R6YF3nBvjqgnbG+/hR10UXgvfUZHjMelnsSlRE59J4jHYnSnpDFqyeHZvULDNCJx lBb9yW0351fY6MGtLGdCw58qGzqUiD/t6aXZdL0U90R1qUTrY1CteeBgmpwtFXY7 j3jOTXeySGj+/X6jlkJI+oauKJ1t4KHyldKznepqhblgdL3KegfAWgjd2MdCLiJe x5KCV3n9Q/GrsHMK0zRJH7NtI8UrPdcNLdif3HazeVcfQoeZON/yQ3FM5TGmOB88 xLgNFIbspT0peq3U/1hvgV/bR+hKYhdB/dxKo6A/Hlb4H9247prsd5/K7sjmz/T8 DFE6kAE+OB87eZIL7uwvIFhJuh6jGGJTLkQosoFHIkJ7b51rKlGXsfxl4KmXJKP+ 3qdzRBp1FzchQYbWMQA104GAfD6zjqoJL2hEAvNtuVb5pHuOj7FNn6oLzmroiC83 ItprEJz0gwUGYh+7TbcgiEoEEBECAAoFAlBRz2kDBQJ4AAoJEKQuQiPIGBpS5r0A niu5gBzPPyZ4hLsA2o+23bWmimcaAJ4oTlg737i9yM2ea888L3DCy8l6SLkCDQRQ Uc2jARAAwGv3DMUijz4AgoSFoi5mfLxa/Ilg6OogFdl8UzjkJ9fQ9aBFdwCqhrg+ m75DAGcsn7S+e62nX0W9lPY3sy2zNq98Hh1wMuHI1cKCw3ricdWxbxvpnMui2gPI 
3vMP33kU55bokSvBIOZc0wBbZg4BFrEcz7JpIsK46lflxPYvtnjFIiA646mtW/xH 8JKNHpVefHfdYBgCvHUW0lL/wchdm5snfXCe7FWZRqTljSVo+RfykKygp1zOiBPl OT8ePuIb0wzF7f3K0OFH1K2wOdipykuUgmzvwXVXP4FaIxEXwtlKSQEu9ScjxcVo UCvYVNcjD3RBcmUeh2vWnJ0saSSGKnwjTKoEEQnKW53soxGTr5KueFv7Op1Rmyp9 Oli+sDpaJIKnk149FBlMVRvBe7OyFB5e8imvJNUbq4MDZ6N187FKsQK887Xv7oCY jCvpcJ376v0YVAtAedw5kMlQhwVsCRWtq87mBCvEbVga66n3wZ6SPVXJGfBSqBgk mp3Oid4U4noO7yvo3E2yU6ejLb8Vq4gff+2Aaivu2IwgnKn/7OiPLoUhUN9iyFgy TKjqgSU0W5qMxa29P7qVHZu7nrZ87QMRY83bcG/D0IJPBHGVUoVovoYozRBz2wVp xcoGXSUcV6mxxj274dkIFZ31hhQBgQOs+KN+JxHEZ8Q1KbCNewMAEQEAAYkCJQQY AQIADwUCUFHNowIbDAUJEfWoPQAKCRAIMbC9A9ixgqZCD/0bfAbIc3Fyq0q9e4Ch i7ayssNUzq70nvpHaIf5JTL9BTVeeb4mR4ALfR9ncwuorgMbQjkD34aiTULa+xCc sEU4giJC/voj0Dae2Zw3kXMR1jexQFWho4GbzYtmJ9QiwLQDMidACDsY75V0a1zc g+MXnwlcRuZj04v0MTdV+kFGF5qJ+H6JOaoILSAhOlury9XDyHP8D2fV+nMxplI9 nU4g49La62jP1ZJOMwitPBQ333FaSrdHa3DTrJh3UX8+A5aiOYRJM7C3FfJZERaS 0iSYf0Vcq0XTr3ORQ0qykJfwipA4ukA8zT+b7CkWDP0IsxMlSUZaDm4nCE498pt8 QhT/v/mU4WMD5PkB+8olJr9nEJYimakoJguxFiH2QPuQbEGHE8hfbHiLVfTVY9ak jTm2BHOJrFFJINCtcX2j8Dzf6SFuo+iPmwRraIQ9d83AruBCSesv8Po2Oe2//zV9 fW+WpbgJVxxgr58LvtrRUOK+HNbNgmPkgKSmq7weSXkDxr5jtVACLhKtN5WWZhnY p/qwGK4J60P/RJQxBrTI660becfGw+ngNG8uAeJ/R87+EDr5BD/nHqCHqcsl1WZE W+i2hbcYSp8BMl3x8Gsup4vWqcORZAhgssdHQW7aJ6msuGVsj5u3VZ9ZSO8/gXLn nQ/zwsG46MiZyLiMbligccsRhw== =49T8 -----END PGP PUBLIC KEY BLOCK----- dar-2.7.17/doc/from_sources.html0000644000175000017520000006654314767475172013504 00000000000000 Dar's Documentation - Building dar and libdar
    Dar Documentation

    How to compile dar and libdar

    There are two ways of compiling dar: the easy way and the less easy way.

    The easy way:
    This is when you grab the latest source package (a file named dar-X.Y.Z.tar.gz, where X.Y.Z is the dar version) and follow the requirements just below. The easy way also applies to interim releases when they exist, which are also fully supported, production-grade software.
    The less easy way:
    This is when you grab the source code from a GIT repository. You will then have to follow the preliminary steps, then continue with the easy way.

    Requirements

    To compile dar from a source package you need at least the following:

    1. a C++ compiler supporting C++14 syntax (for gcc that means version 6.1 minimum; clang is also supported). For versions older than 2.7.0 only C++11 is required (at least gcc version 4.9), while for versions older than dar-2.5.0 C++11 support is not required, but if using gcc you need at least version 3.4.x. A standard C++ library is also required; compilation has been tested with libc++ and stdlibc++.
    2. a linker like "ld" the GNU Linker
    3. the make program (tested with gnu make)
    4. pkg-config to help detect and configure proper CFLAGS/CXXFLAGS and LDFLAGS for the optional libraries dar may rely on (see below)

    Optionally, you may also have installed the following tools and libraries:

    • libz library for gzip compression support
    • libbzip2 library for bzip2 compression support
    • liblzo2 library for lzo compression support
    • libxz library for xz/lzma compression support
    • Zstandard library (version greater than or equal to 1.3.0) for zstd compression support
    • LZ4 library for lz4 compression support
    • gnu Getopt support (Linux has it in all distros thanks to its glibc; this is not true for FreeBSD, for example)
    • libgcrypt version 1.4.0 or greater for strong symmetric encryption (blowfish, aes, etc.) and hash (sha1, md5) support
    • gpgme library version 1.2.0 or greater for strong asymmetric encryption and signature (RSA, DSA, etc.)
    • doxygen for generation of source code documentation
    • dot to generate graphs of the C++ class hierarchy within the doxygen documentation
    • upx to generate dar_suite upx compressed binaries
    • groff to generate html version of man pages
    • ext2/ext3/ext4 file system libraries for Linux Filesystem Specific Attributes and nodump flag support
    • libthreadar (version 1.3.0 or more recent; for macOS use version 1.4.0 or more recent) for libdar to use several threads and for the remote repository feature (the latter needs libcurl in addition, see below)
    • librsync for binary delta
    • libcurl for remote repository access using ftp or sftp protocols
    • python3 (python3-dev) and pybind11 to access libdar from python
    • libargon2 provides a security enhancement for the key derivation function (strong encryption)

    Requirements for the Optional Features

    Feature Requirements
    zlib compression libz library headers and library
    bzip2 compression libbzip2 library headers and library
    lzo compression liblzo2 library headers and library
    xz/lzma compression libxz library headers and library
    zstd compression libzstd library headers and library
    lz4 compression liblz4 library headers and library
    strong symmetric encryption libgcrypt library headers and library
    strong asymmetric encryption libgcrypt library headers and library
    libgpgme library headers and library
    documentation building the doxygen and dot executables at compilation time
    upx-compressed dar binaries the upx executable at compilation time
    man page in html format the groff executable at compilation time
    save/restore Linux File-system Specific Attributes libext2fs library headers and library
    dar's --nodump option support libext2fs library headers and library
    remote repositories support (ftp, sftp) libthreadar library headers and library
    libcurl library headers and library
    binary delta support librsync library headers and library
    multi-thread compression/decompression libthreadar library headers and library
    multi-thread ciphering/deciphering libthreadar library headers and library
    python 3 API pybind11 headers and library
    key derivation function based on argon2 algorithm libargon2 headers and library

    Dependencies in distro packages

    For simplicity, here follow the package names that provide the previously mentioned libdar dependencies. If you know the equivalent names for other distros, feel free to contact dar's maintainer for this table to be updated.

    Distro Debian/Devuan/Ubuntu
    pkg-config tool pkg-config
    libz library zlib1g-dev
    libbzip2 library libbz2-dev
    liblzo2 library liblzo2-dev
    libxz library liblzma-dev
    libzstd library libzstd-dev
    liblz4 library liblz4-dev
    libgcrypt library libgcrypt-dev
    libgpgme library libgpgme-dev
    doxygen binary doxygen
    dot binary graphviz
    upx binary upx
    groff binary groff
    libext2fs library libext2fs-dev
    libthreadar library libthreadar-dev
    librsync library librsync-dev
    libcurl library libcurl-dev
    pybind11 library python3-pybind11
    python3-dev
    libargon2 library libargon2-dev

    Compilation Process

    Once you have the minimum requirements, dar can be compiled from source code as follows:

    ./configure [eventually with some options]
    make
    make install-strip
    Important:

    due to a bug in the autoconf/libtool software used to build the configure script, you must not have spaces in the path name where dar's sources are extracted. You can install the dar binary anywhere you want; the problem does not concern dar itself but the ./configure script used to build dar: to work properly, it must not be run from a path which has a space in it.

    Important too:

    By default the configure script sets optimization to -O2; depending on the compiler, this may lead to problems in the resulting binary (or even in the compilation process). Before reporting a bug, first try to compile with less optimization:

    CXXFLAGS=-O
    export CXXFLAGS
    make clean distclean
    ./configure [options...]
    make
    make install-strip

    The configure script may receive several options (listed here); in particular, the --prefix option lets you install dar and libdar somewhere other than the default /usr/local. For example, to have dar installed under /usr, use: ./configure --prefix=/usr. You will be able to uninstall dar/libdar by calling make uninstall; this implies keeping the source package directory around and giving ./configure the same options as were used at installation time.

    If you prefer building a package without installing dar, which is especially suitable for package maintainers, the DESTDIR variable may be set at installation time to install dar under another root directory. This makes the creation of dar binary packages very easy. Here is an example:

    ./configure --prefix=/usr [eventually with some options]
    make
    make DESTDIR=/some/where install-strip

    As a result of the previous example, dar will be installed in the /some/where/usr/{bin | lib | ...} directories, but built as if it were installed in /usr; thus it will look for /etc/darrc and not /some/where/etc/darrc. You can then build a package from the files under /some/where by means of a package management tool, and install/remove this package with the distro's package management tools.
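    The staging behavior can be demonstrated with plain shell. This is a minimal sketch with a hypothetical placeholder file standing in for the dar binary; a real build installs many more files, but the convention is the same: the prefix is baked in at build time, while DESTDIR only offsets where files land at install time.

```shell
set -e
prefix=/usr                  # what ./configure --prefix=/usr records
DESTDIR=$(mktemp -d)         # the staging root, e.g. /some/where
mkdir -p "$DESTDIR$prefix/bin" "$DESTDIR/etc"
# stage a placeholder "binary" under $DESTDIR$prefix, as install-strip would
echo demo > "$DESTDIR$prefix/bin/dar"
# the staged program would still look up /etc/darrc, not $DESTDIR/etc/darrc,
# because only $prefix is compiled into it
ls "$DESTDIR/usr/bin/dar"
```

    A package tool then archives the tree below $DESTDIR, so installing the resulting package on a target system places the files directly under /usr.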

    Options for the configure script

    Available options for the configure script

    Optimization option:
    --enable-mode

    --enable-mode=32 or --enable-mode=infinint

    if set, replaces the 64-bit integers used by default with 32-bit integers or "infinint" integers (limitless integers). Before release 2.6.0 the default integer type was infinint, and this option was added for speed optimization at the cost of some limitations (see the limitations for more).

    Since release 2.6.0, the default is 64-bit integers (the limitations stay the same) instead of infinint. But if you hit the 64-bit integer limitations, you can still use infinint, which overcomes them at the cost of slower performance and increased memory requirements.
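    For a sense of scale, the 64-bit ceiling mentioned above can be computed directly (illustrative only; dar's actual integer handling lives in libdar, not in this snippet):

```python
# Largest value a 64-bit unsigned integer can hold: about 16 EiB.
# Sizes, offsets or counters beyond this need the "infinint" mode,
# where Python-style arbitrary-precision arithmetic has no such limit.
MAX64 = 2**64 - 1
print(MAX64)             # 18446744073709551615
print((MAX64 + 1) // 2**60)  # 16 (EiB)
```

    In practice few setups reach this bound, which is why 64-bit integers became the default.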

    Deactivation options:
    --disable-largefile Whatever your system is, dar will not be able to handle file of size larger than 4GB
    --disable-ea-support Whatever your system is, dar will not be able to save or restore Extended Attributes (see the Notes paragraphs I and V)
    --disable-nodump-flag Whatever your system is, dar will not be able to take care of the nodump-flag (thanks to the --nodump option)
    --disable-linux-statx Even if your system provides the statx() system call, dar will ignore it and will not save the birthtime of files as a Linux FSA. At restoration time you will not be bothered by the warning telling you that birthtime cannot be restored under Linux, if you still want to restore other Linux FSA.
    --disable-dar-static dar_static binary (statically linked version of dar) will not be built
    --disable-special-alloc dar uses a special allocation scheme by default (gathering the many small allocations into fewer big ones), which improves dar's execution speed
    --disable-upx If upx is found in the PATH, binaries are upx-compressed at installation time. Use this option when upx is available but you don't want compressed binaries.
    --disable-gnugetopt on non GNU systems (Solaris, etc.) configure looks for libgnugetopt to have the long options support thanks to the gnu getopt_long() call, this can be disabled.
    --disable-thread-safe libdar may need POSIX mutexes to be thread-safe. If you don't want libdar relying on POSIX mutexes even when they are available, use this option. The resulting library may not be thread-safe. But it will always be thread-safe if you use --disable-special-alloc, and it will never be thread-safe if --enable-test-memory is used.
    --disable-libdl-linking Ignore any libdl library and avoid linking with it
    --disable-libz-linking Disable linking to libz, thus -zgzip:* option (gzip compression) will not be available
    --disable-libbz2-linking Disable linking to libbz2, thus -zbzip2:* option (libbz2 compression) will not be available
    --disable-liblzo2-linking Disable linking to liblzo2, thus -zlzo:* option (lzo compression) will not be available
    --disable-libxz-linking Disable linking to liblzma5, thus the -zxz:* option (xz compression) will not be available
    --disable-libgcrypt-linking Disable linking with the libgcrypt library. Neither strong encryption nor hashing of generated slices will be available.
    --disable-gpgme-linking Disable linking with the gpgme library. Asymmetric strong encryption algorithms will not be available
    --disable-build-html Do not build API documentation reference with Doxygen (when it is available)
    --disable-furtive-read Do not try to detect whether the system does support furtive read mode. This will lead furtive read mode to stay disabled in any case.
    --disable-fast-dir Disable optimization for large directories, doing so has a little positive impact on memory requirement but a huge drawback on execution time
    --disable-execinfo Disable reporting stack information upon self-diagnosed bugs
    --disable-threadar Avoid linking with libthreadar even if available, libdar will not create threads
    --disable-birthtime Disable the HFS+ Filesystem Specific Attribute support
    --disable-librsync-linking Disable linking with librsync, thus delta binary will not be available
    --disable-libcurl-linking Disable linking with libcurl, thus remote repository support using ftp or sftp will not be available
    --enable-limit-time-accuracy={s|us|ns} Limit the timestamp precision of files to seconds, microseconds (lowercase u, not the Greek letter μ of μs) or nanoseconds respectively. By default dar uses the maximum time precision supported by the operating system.
    Troubleshooting option:
    --enable-os-bits If set, dar uses the given argument (32 or 64) to determine which integer type to use. This must match your CPU register size. By default dar uses the system <stdint.h> file to determine the correct integer type to use
    Debugging options:
    --enable-examples If set, example programs based on infinint will also be built
    --enable-debug If set, use debug compilation option, and if possible statically link binaries
    --enable-pedantic If set, transmits the -pedantic option to the compiler
    --enable-profiling Enable executable profiling
    --enable-debug-memory If set, logs all memory allocations and releases to /tmp/dar_debug_mem_allocation.txt. The resulting executable is expected to be very slow

    GIT

    Presentation

    To manage its source code versions DAR uses GIT (it used CVS up to Q1 2012 and even the older RCS in 2002). Since September 2nd 2017, the GIT repository has been cloned to GitHub while the original repository at Sourceforge still stays updated. Both should contain exactly the same code; it's up to you to choose the repository you prefer. For sanity also, starting September 3rd 2017, git tags, including those for release candidates, are signed with GPG.

    Dar's repository Organization

    GIT (more than CVS) eases the use of branches. In dar's repository there are thus a lot of them: the first and main one is called "master". It contains current development and most probably unstable code. The other permanent branches hold stable code. They are all named "branch_A.B.x", where A and B are the numbers corresponding to a released version family. For example, "branch_2.6.x" holds the stable code for releases 2.6.0, 2.6.1, 2.6.2 and so on. It also holds pending fixes for the next release on that branch you might be interested in.

    The global organisation of the repository is thus the following:

      (HEAD of "master" branch)
      new feature 101
       |
       ^
       |
      new feature 100
       |
       ^
       |
      new feature 99
       |
       +--->-- fix 901 ->- fix 902 (release 2.4.1) ->- fix 903 ->- fix 904 (release 2.4.2) ->- fix 905 (HEAD of branch_2.4.x)
       |
      new feature 98
       |
       ^
       |
      new feature 97
       |
       +--->-- fix 801 ->- fix 802 (release 2.3.1) (also HEAD of branch_2.3.x as no pending fix is waiting for release)
       |
      ...
       |
       ^
       |
      initial version

    Usage

    To get dar source code from GIT you first have to clone the repository (here cloning from GitHub):

    git clone https://github.com/Edrusb/DAR.git
    cd DAR

    You will probably not want to use current development code, so you have to switch from the master branch to the "branch_A.B.x" branch of your choice:

    git checkout branch_2.6.x

    That's all. You now have the most recent stable code (for branch_2.6.x in this example). To see what changes have been brought since the last release, use the following command:

    git log

    If you plan to keep the repository you've cloned, updating it is as simple as running (no need to clone the repository from scratch again):

    git pull

    GitHub also provides a web interface to the git repository.

    Having the sources ready for compilation

    Please read the file named USING_SOURCE_FROM_GIT located at the root of the directory tree you retrieved through GIT; it contains up-to-date information about the required tools and how to generate the configuration file. Then you can proceed to source compilation as with a regular source package.

    Related Software

    Dar Documentation

    DAR's FEATURES

    This table lists the main features of the dar/libdar tool. For each feature an overview is presented, with some pointers you are welcome to follow for more detailed information.

    HARD LINK CONSIDERATION

    Hard links are properly saved in any case and properly restored if possible. For example, if a restoration spans a mounted file system, hard linking will fail, but dar will then duplicate the inode and file contents, issuing a warning. Hard link support includes the following inode types: plain files, char devices, block devices and symlinks (yes, you can hard link symbolic links! Thanks to Wesley Leggette for the info ;-) )



    SPARSE FILES references: man dar

    --sparse-file-min-size, -ah

    By default Dar takes care of sparse files, even if the underlying filesystem does not support sparse files(!).

    When a long sequence of zeroed bytes is met in a file during backup, those bytes are not stored into the backup; the number of zeroed bytes is stored instead (a structure known as a "hole"). When the time comes to restore that file, dar restores the normal data, but when a hole is met in the backup, dar skips directly to the position of the data following that hole. If the underlying filesystem supports sparse files, this will (re)create a hole in the restored file, making it a sparse file.

    Sparse files can appear to be several hundred gigabytes large while needing only a few bytes of disk space. Not being able to properly save and restore them not only wastes storage holding the backups, it can also make it impossible to restore your data onto a disk of the same size.



    EXTENDED ATTRIBUTES (EA) references: man dar
    keywords: -u -U -am -ae --alter=list-ea

    Dar is able to save and restore EA, all or just those matching a given pattern.

    File forks (MacOS X) are implemented over EA, as are Linux's ACLs; they are thus transparently saved, tested, compared and restored by dar. Note that ACLs under MacOS do not seem to rely on EA; as they are marginally used, they are ignored by dar



    FILESYSTEM SPECIFIC ATTRIBUTES (FSA) references: man dar
    keyword: --fsa-family

    Since release 2.5.0 dar is able to take care of filesystem specific attributes. Those are grouped by family, strongly linked to the filesystem they have been read from, but orthogonally each FSA is also designated by a function. This way it is possible to translate an FSA from one filesystem to another when there is an equivalent role.

    Currently two families are present:

    • HFS+ family contains only one function : the birthtime. In addition to ctime, mtime and atime, dar can backup, compare and restore all four dates of a given inode (well, ctime is not possible to restore).
    • extX family contains 12 functions (append_only, compressed, no_dump, immutable, journaling, secure_deletion, no_tail_merging, undeletable, noatime_update, synchronous_directory, synchronous_update, top_of_dir_hierarchy) found on ext2/3/4 and some other Linux filesystems. Dar can thus save and restore all of those for each file depending on the capabilities or permissions dar has at restoration time.


    DIRTY FILES references: man dar

    keywords: --dirty-behavior , --retry-on-change

    At backup time, dar checks whether each saved file changed while it was being read. If a file changed in that situation, dar retries saving it up to three times (by default); if it is still changing, it is flagged as "dirty" in the backup and handled differently from other files at restoration time. The dirty-file handling is either to warn the user before restoring them, to ignore them and avoid restoring them, or to ignore the dirty flag and restore them normally.



    FILTERS references: man dar command line usage notes

    keywords: -I -X -P -g -[ -] -am --exclude-by-ea

    dar is able to back up anything from a whole filesystem down to a single file, thanks to its filter mechanism. This mechanism is dual-headed: the first head lets one decide which part of a directory tree to consider for the operation (backup, restoration, etc.), while the second head defines which type of file to consider (a filter based only on the filename, for example the extension of the file).

    For backup operations, files and directories can also be filtered out if they have a given user-defined EA set.



    NODUMP FLAG references: man dar

    keywords: --nodump

    Many filesystems, like the ext2/3/4 filesystems, provide for each inode a set of flags, among which is the "nodump" flag. You can instruct dar to avoid saving files that have this flag set, as the so-called dump backup program does.



    ONE FILESYSTEM references: man dar

    keywords: -M

    By default dar does not stop at filesystem boundaries unless the filtering mechanism described above excludes a mount point. But you can also ask dar not to recurse into a given filesystem, or, at the opposite, give a list of filesystems to recurse into exclusively, without the burden of finding and listing the directories to be excluded from the backup, which can be even more complicated when bind mounts are used (i.e. a given filesystem mounted several times).



    CACHE DIRECTORY TAGGING STANDARD references: man dar

    keywords: --cache-directory-tagging

    Many programs use cache directories (the mozilla web browser for example), directories holding temporary data that is not worth backing up. The Cache Directory Tagging Standard provides a standard way for software applications to identify this type of data, which lets dar (like some other backup software) ignore cache data designated as such by other applications.



    DIFFERENTIAL BACKUP references: man dar/TUTORIAL

    keywords: -A

    When making a backup with dar, you have the possibility to make a full backup or a differential backup.

    A full backup, as expected, backs up all files as specified with the optional filtering mechanisms.

    A differential backup, instead, saves only the files that have changed since a given reference backup. Additionally, files that existed in the reference backup and no longer exist at the time of the differential backup are recorded in the backup as "having been removed". At recovery time (unless you deactivate it), restoring a differential backup will update changed files and new files, but also remove files that have been recorded as "having been removed".

    Note that the reference backup can be a full backup or another differential backup (this second method is usually designated as incremental backup). This way you can, for example, make a first full backup, then many incremental backups, each taking as reference the last backup made.
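    A minimal full-then-differential cycle can be sketched as follows, using the -c, -R and -A options described above (paths and backup basenames are illustrative):

    ```shell
    # Full backup of /home, producing slices named full_backup.*.dar
    dar -c full_backup -R /home

    # Later: differential backup, saving only what changed since full_backup
    dar -c diff_backup -R /home -A full_backup

    # Incremental style: each new backup takes the previous one as reference
    dar -c diff_backup2 -R /home -A diff_backup
    ```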



    DECREMENTAL BACKUP references: man dar / Decremental backup

    keywords: -+ -ad

    As opposed to incremental backups, where the oldest one is a full backup and each subsequent backup contains only the changes since the previous one, decremental backups let the full backup be the most recent one, while the older backups contain only the changes relative to the next more recent one.

    This has the advantage of providing a single backup to use to restore a whole system in its latest known state, while reducing the overall amount of data needed to retain older versions of files (the same amount as required with differential backups). It also has the advantage that you do not have to keep several backup sets: you just delete the oldest backup when you need storage space. However, it has the drawback of requiring, at each new cycle, the creation of a full backup and then the transformation of the previous full backup into a so-called decremental backup. Yes, everything has a cost!



    DELTA BINARY references: man dar

    keywords: --delta sig, --include-delta-sig, --exclude-delta-sig, --delta-sig-min-size, --delta no-patch

    Since release 2.6.0, for incremental and differential backups only, instead of saving a whole file when it has changed, dar/libdar provides the ability to save only the part of it that has changed. This feature, called binary delta, relies on the librsync library. It is not activated by default, considering the non-null probability of collision between two different versions of a file. This is also the choice of the dar user community.

    However, it takes you one step beyond differential backup in terms of backup space optimization and network data transfer reduction.



    PREVENTING ROOTKITS AND OTHER MALWARE references: man dar

    keywords: -asecu

    At backup time, when a differential, incremental or decremental backup is made, dar compares the status of inodes on the filesystem to the status they had at the time of the last backup. If the ctime of a file has changed while no other inode field changed, dar issues a warning considering that file as suspicious. This does not mean that your system has been compromised, but you are strongly advised to check whether the concerned file has recently been updated (some package managers may lead to that situation) or has had its Extended Attributes changed since the last backup was made. In normal situations this type of warning does not show up often (false positives are rare but possible). However, in case your system has been infected by a virus, or compromised by a rootkit or a trojan, dar will signal the problem if the intruder tried to hide its misdeeds.



    DIRECTORY TREE SNAPSHOT references: man dar

    keywords: -A +

    Dar can make a snapshot of a directory tree and files, or even of a whole system, recording the inode status of each file. This may be used to detect changes in the filesystem by "diffing" the resulting snapshot with the filesystem at a later time. The resulting snapshot can also be used as a reference to save files that have changed since the snapshot was made.

    A snapshot is just a special dar backup that is very small compared to the corresponding full backup, but of course it cannot be used to restore any data. As a dar backup, it can be created using compression, slices, encryption...



    SLICES references: man dar/ TUTORIAL

    keywords: -s -S -p -aSI -abinary

    Dar stands for Disk ARchive. From the beginning it was designed to be able to split an archive (or backup) over several removable media, whatever their number and whatever their size. To restore from such a split archive, dar will directly fetch the requested data in the correct slice(s). dar is suitable for backup over old floppy disks, CD-R, DVD-R, CD-RW, DVD-RW, Zip, Jazz, but also for cloud computing, where some providers restrict the maximum size a file can have.

    Given the size, dar will split the archive/backup into several files (called SLICES), optionally pausing before creating the next one, and/or allowing the user to automate any action (like un/mounting a medium, burning the file to CD-R, sending it to the cloud, and so on)

    Additionally, the size of the first slice can be specified separately, if for example you first want to fill up a partially filled disk before starting to use empty ones. Last, at restoration time, dar will pause and prompt the user for a slice only if it is missing, here too allowing the user to automate any particular action (downloading the slice from the cloud, mounting/unmounting a removable medium and so on).

    You can choose to have more than one slice per medium without penalty from dar (no more user interaction than asking the user to change the removable medium once it has been read), just one slice per medium, or even a backup without slices, which is a single file, depending on your needs.
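    Slicing can be sketched with the -s, -S and -p options listed above (sizes and the backup basename are illustrative):

    ```shell
    # Split the backup into 700 MB slices, with the first slice limited
    # to 200 MB (e.g. to fill up a partially used disk), pausing (-p)
    # between slices so the medium can be changed
    dar -c my_backup -R /home -s 700M -S 200M -p
    ```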



    COMPRESSION references: man dar

    keywords: -z

    dar can use compression. Currently the gzip, bzip2, lzo, xz/lzma, zstd and lz4 algorithms are available, and there is still room for any other compression algorithm. Note that compression is applied before slicing, which means that using compression together with slices will not make slices smaller, but will probably result in fewer slices in the backup.
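    Compression is requested through the -z option using the algorithm:level syntax (backup basename and path are illustrative):

    ```shell
    # Create a gzip-compressed backup of /home at compression level 9
    dar -c my_backup -R /home -zgzip:9
    ```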



    SELECTIVE COMPRESSION references: man dar/ samples

    keywords: -Y -Z -m -am

    dar can be given a special filter that determines which files will be compressed or not. This way you can speed up the backup operation by not trying to compress *.mp3, *.mpg, *.zip, *.gz and other already-compressed files, for example. Moreover, another mechanism lets you specify that files below a given size (whatever their name) will not be compressed.



    STRONG ENCRYPTION references: man dar

    keywords: -K -J -# -* blowfish, twofish, aes256, serpent256, camellia256, --kdf-param

    Dar can use the blowfish, twofish, aes256, serpent256 and camellia256 algorithms to encrypt the whole backup. Two "elastic buffers" are inserted and encrypted with the rest of the data, one at the beginning and one at the end of the archive, to prevent clear-text or codebook attacks.

    For symmetric key encryption several Key Derivation Functions are available, from the legacy PBKDF2 (PKCS#5 v2) to the modern Argon2 algorithm. The user can set the hash algorithm for the former and the iteration count for both algorithms.



    PUBLIC KEY ENCRYPTION references: man dar

    keywords: -K, --key-length

    Encryption based on a GPG public key is available. A given backup can be encrypted for a recipient (or several recipients, without visible overhead) using their public key. Only the recipient(s) will be able to read such an encrypted backup.

    The advantage over ciphering the backup as a whole is that you don't have to decipher it all to extract a particular file or set of files, which brings a huge gain in CPU usage and execution time.



    PRIVATE KEY SIGNATURE references: man dar

    keywords: --sign

    When using encryption with a public key, it is in addition possible to sign an archive with your own private key(s). Your recipients can then be sure the archive has been generated by you: dar will check the signature's validity against the corresponding public key(s) each time the archive is used (restoration, testing, etc.), and a warning is issued if the signature does not match or a key is missing to verify it. You can also get the list of the archive's signatories while listing the archive contents.



    SLICE HASHING references: man dar

    --hash, md5, sha1, sha512

    When creating a backup, dar can compute an md5, sha1 or sha512 hash before the backup is even written to disk, and produce a small file compatible with md5sum, sha1sum or sha512sum that lets you verify that the medium has not corrupted the slices of the backup.
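    For example (the backup basename is illustrative), the per-slice hash files can be checked with the standard coreutils tools:

    ```shell
    # Create a sliced backup, producing a .sha512 file beside each slice
    dar -c my_backup -R /home -s 1G --hash sha512

    # Later, verify that the medium has not corrupted the first slice
    sha512sum -c my_backup.1.dar.sha512
    ```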



    DATA PROTECTION references: man dar / Parchive integration

    keywords: -al

    Dar is able to detect corruption in any part of a dar backup, but it cannot fix it.

    Dar relies on the Parchive program for data protection against media errors. Thanks to dar's ability to run user commands or scripts, and thanks to the ad hoc provided scripts, dar can use Parchive as simply as adding a word (par2) on the command-line. Depending on the context (backup, restoration, testing, ...), dar will by this means create parity data for each slice, or verify and if necessary repair the archive slices.

    Without Parchive, dar can work around a corruption, skipping the concerned file and restoring all the others. For some more vital parts of the backup, like the "catalog" (the table of contents), dar can use an isolated catalog as a rescue for the internal catalog of the corrupted backup. It can also make use of the tape marks used inside the backup for sequential reading as a way to overcome catalog corruption. The other vital information is the slice layout, which is replicated in each slice and lets dar overcome data corruption of that part too. As a last resort, dar also proposes a "lax" mode in which the user is asked questions (like which compression algorithm was used, ...) to help dar recover very corrupted archives, and in which many sanity checks are turned into warnings instead of aborting the operation. However, this does not replace using Parchive; this "lax" mode has to be considered the last-resort option.



    TRUNCATED ARCHIVE/BACKUP REPARATION reference: man dar

    keyword: -y

    Since version 2.6.0 a truncated archive (due to lack of disk space, power outage, or any other reason) can be repaired. A truncated archive lacks a table of contents, which is located at the end of the archive; without it you cannot know which files are saved and where to fetch their data from, unless you use the sequential reading mode, which is slow as it implies reading the whole archive even to restore just one file. To allow sequential reading of an archive, which is suitable for tape media, some metadata is by default inserted all along the archive. This metadata is globally the same information the missing table of contents should contain, but spread in pieces all along the archive. Repairing an archive consists of gathering this inlined metadata and adding it at the end of the repaired archive to allow direct access mode (the default mode), which is fast and efficient.



    DIRECT ACCESS

    Even using compression and/or encryption, dar does not have to read the whole backup to extract one file. This way, if you just want to restore one file from a huge backup, the process will be very quick. Dar first reads the catalogue (i.e. the contents of the backup), then goes directly to the location of the saved file(s) you want to restore and proceeds to the restoration. In particular, when using slices, dar will ask only for the slice(s) containing the file(s) to restore.

    Since version 2.6.0 dar can also read a backup from a remote host by means of FTP or SFTP. Here too dar can leverage its direct access ability to download only what is necessary to restore some files from a large backup, list the backup contents, or even compare a set of files with the live filesystem.



    SEQUENTIAL ACCESS references: man dar
    (suitable for tapes) --sequential-read, -at

    The direct access feature seen above is well adapted to random access media like disks, but not to tapes. Since release 2.4.0, dar provides a sequential mode in which it reads and writes archives sequentially. This has the advantage of being efficient with tapes, but it suffers from the same drawback as tar archives: it is slow to restore a single file from a huge archive. The second advantage is the ability to repair a truncated archive (lack of disk space, power outage, ...) as described above.



    MULTI-VOLUME TAPES references: man dar_split

    keywords: --sequential-read

    The independent dar_split program provides a means to output dar (but also tar) archives to several tapes. It takes care of splitting the archive when writing to tapes, and gathers the pieces of the archive from several tapes for dar/tar to work as if it were a single-piece archive.
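    A sketch of the multi-tape workflow, assuming the split_output/split_input modes described in the dar_split man page (the tape device path is illustrative):

    ```shell
    # Create: dar writes the backup to stdout ("-"), dar_split spreads
    # it over as many tapes as needed, prompting for tape changes
    dar -c - -R /home | dar_split split_output /dev/st0

    # Test: dar_split feeds the tape content back to dar, which reads
    # the archive in sequential mode
    dar_split split_input /dev/st0 | dar -t - --sequential-read
    ```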



    ARCHIVE/BACKUP TESTING references: man dar / TUTORIAL / Good Backup Practice

    keywords: -t

    Thanks to CRC (cyclic redundancy checks), dar is able to detect data corruption in a backup. Only the file where the data corruption occurred will be impossible to restore; dar will restore the others, even when compression or encryption (or both) are used.



    ISOLATION references: man dar

    keywords: -C -A -@

    The catalogue (i.e. the contents of a backup) can be extracted as a copy (an operation called isolation) to a small file, which can in turn be used as the reference for a differential backup and as a rescue for the internal catalogue (in case of backup corruption).

    There is then no need to provide a backup to be able to create a differential backup based on it; its isolated catalogue can be used instead. Such an isolated catalogue
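    Isolation and its use as a reference can be sketched as follows (basenames are illustrative):

    ```shell
    # Isolate the catalogue of an existing backup into a small file
    dar -C my_catalog -A full_backup

    # Use the isolated catalogue, instead of the backup itself,
    # as the reference for a differential backup
    dar -c diff_backup -R /home -A my_catalog
    ```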



    FLAT RESTORATION references: man dar

    keywords: -f

    It is possible to restore any file without restoring the directories and subdirectories it was in at the time of the backup. If this option is activated, all files are restored directly into the (-R) root directory, whatever their real position recorded inside the backup.
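    For example (paths are illustrative), combining -f with the -g option to select a single file from the backup:

    ```shell
    # Restore one file from the backup directly into /tmp/flat,
    # ignoring the directory tree it was recorded under
    dar -x my_backup -R /tmp/flat -f -g home/user/report.txt
    ```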



    USER COMMAND BETWEEN SLICES references: man dar dar_slave dar_xform / command line usage notes

    keywords: -E -F -~

    Several hooks are provided for dar to call a given command once a slice has been written or before reading a slice. Several macros let the user command or script know the requested slice number, path and backup basename.



    USER COMMAND BEFORE AND AFTER SAVING A DIRECTORY OR A FILE references: man dar / command line usage notes

    keywords: -< -> -=

    It is possible to define a set of files that will have a command executed before dar starts saving them and once dar has finished saving them. Before entering a directory dar will call the specified user command, then proceed to the backup of that directory. Once the whole directory has been saved, dar will call the same user command again (with slightly different arguments) and then continue the backup process. Such a user command may for example run a particular command whose output is redirected to a file in that directory, suitable for backup. Another purpose is to force auto-mounting of filesystems that would otherwise not be visible and thus not saved.



    CONFIGURATION FILE references: man dar / conditional syntax and user targets

    keywords: -B

    dar can read parameters from a file. This is a way to go beyond the command-line's limited input length. A configuration file can ask dar to read (or include) other configuration files. A simple but efficient mechanism forbids a file to include itself, directly or indirectly, and there is no limitation on the recursion depth for the inclusion of configuration files.

    Two special configuration files, $HOME/.darrc and /etc/darrc, are read if they exist. They share the same syntax as any configuration file, which is the syntax used on the command-line, optionally completed with newlines and comments.

    Any configuration file can also contain conditional statements, which describe which options are to be used under different conditions. Conditions are: "extract", "listing", "test", "diff", "create", "isolate", "merge", "reference", "auxiliary", "all", "default" (which may be useful in case of recursive inclusion of files) ... more about their meaning and use cases in the dar man page.



    REMOTE OPERATIONS references: command line usage notes / man dar/dar_slave/dar_xform

    keywords: -i -o - -afile-auth

    dar is able to read and write a backup to a remote server in three different ways:

    1. dar is able to produce a backup to its standard output or to a named pipe, and is able to read a backup from its standard input or from a named pipe
    2. While the previous approach is fine for writing a backup over the network (through an ssh session for example), reading from a remote server that way (using a single pipe) requires dar to read the whole backup, which may be inefficient just to restore a single file. For that reason, dar is also able to read a backup through a pair of pipes (or named pipes), with dar_slave at the other side of the pipes. Over one pipe, dar asks dar_slave which portion of the backup file it has to send through the other pipe. This makes a remote restoration much more efficient, and these bidirectional exchanges can still be encrypted over the network by simply running dar_slave through an ssh session.
    3. Last, since release 2.6.0 dar can make use of the FTP or SFTP protocols to read or write a backup from or to a remote server. This method does not rely on anonymous or named pipes, is as efficient as option 2 for reading a remote backup, and is compatible with slicing and slice hashing. However, this option is restricted to these two network protocols: FTP (low CPU usage but insecure) and SFTP (secure)
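    The dar/dar_slave pairing of option 2 can be sketched with two named pipes and ssh (hostnames and paths are illustrative; -i and -o select the input and output pipes as described in the man page):

    ```shell
    # Two local named pipes carry the bidirectional exchange
    mkfifo /tmp/todar /tmp/toslave

    # dar_slave runs on the remote host, serving the backup over ssh
    ssh user@remote dar_slave /path/to/backup < /tmp/toslave > /tmp/todar &

    # dar reads archive data from one pipe and sends its requests
    # down the other, here to list the backup contents
    dar -l - -i /tmp/todar -o /tmp/toslave
    ```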


    DAR MANAGER references: man dar_manager


    The advantage of differential backups is that they take much less space to store and less time to complete than always making full backups. On the other hand, they may leave you with a lot of backups due to the reduced space requirements. Then, if you want to restore a particular file, you may spend time figuring out which backup holds its most recent version. To solve this, dar_manager gathers the contents of all your backups into a database (a Dar Manager Database, which ends up as a single file). At restoration time, it will call dar for you to restore the requested file(s) from the proper backup.
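    A typical dar_manager session can be sketched as follows (the database and backup names are illustrative):

    ```shell
    # Create an empty Dar Manager Database
    dar_manager -C backups.dmd

    # Register existing backups into it
    dar_manager -B backups.dmd -A full_backup
    dar_manager -B backups.dmd -A diff_backup

    # Restore a file: dar_manager calls dar against the backup
    # that holds the file's most recent version
    dar_manager -B backups.dmd -r home/user/report.txt
    ```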



    RE-SHAPE SLICES OF AN EXISTING ARCHIVE/BACKUP references: man dar_xform


    The provided program named dar_xform is able to change the size of the slices of a given backup. The resulting backup is totally identical to one directly created by dar. The source backup can be taken from a set of slices, from standard input or even from a named pipe. Note that dar_xform can work on encrypted and/or compressed data without having to decompress or even decrypt it.
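    For example (basenames are illustrative), re-cutting an existing backup into 1 GB slices:

    ```shell
    # Read the slices of my_backup and produce resliced.*.dar
    # with 1 GB slices, without decrypting or decompressing anything
    dar_xform -s 1G my_backup resliced
    ```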



    ARCHIVE/BACKUP MERGING references: man dar

    keywords: -+ -ak -A -@

    Since version 2.3.0, dar supports merging two existing archives into a single one. The merging operation supports the same filtering mechanism used for archive creation, which lets the user define which files will be part of the resulting archive.

    By extension, archive merging can also take a single source archive as input. This may sound a bit strange at first, but it lets you build a subset of a given archive without having to extract any file to disk. In particular, if your filesystem does not support Extended Attributes (EA), this feature still lets you clean an archive of files you no longer want to keep, without losing any EA or modifying any standard file attribute (like modification dates) of the files that stay in the resulting archive.

    Last, the merging feature also gives you the opportunity to change the compression level or algorithm, as well as the encryption algorithm and passphrase. Of course, from a pair of source archives you can do all of this at the same time: filter out files you do not want in the resulting archive, use a different compression level and algorithm or a different encryption password and algorithm than the source archive(s), and even produce a different slicing or no slicing at all (though dar_xform is more efficient if re-slicing is all you need; see "RE-SHAPE SLICES OF AN EXISTING ARCHIVE/BACKUP" above for details).
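    As a sketch (archive names and the filter are hypothetical): merging a full and a differential backup into a single archive, keeping only the home directory and switching to bzip2 compression, could look like:

```
# -+ creates the merged archive, -A and -@ designate the two source archives
dar -+ merged_backup -A full_backup -@ diff_backup -g home -zbzip2
```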



    ARCHIVE SUBSETTING references: man dar

    keywords: -+ -ak

    As seen above in the "archive merging" feature description, it is possible to define a subset of files from an archive and put them into a new archive without actually extracting these files to disk. To speed up the process, it is also possible to avoid uncompressing/recompressing the files kept in the resulting archive (or to change their compression), as well as to change the encryption scheme. Last, you can manipulate files and their EA this way even when your system has no EA support.
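    For example (the names are hypothetical), extracting a subset without touching the disk could look like:

```
# keep only home/project in the new archive; -ak ("keep compressed") avoids
# a decompress/recompress cycle for the files that are retained
dar -+ subset_backup -A big_backup -g home/project -ak
```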



    DRY-RUN EXECUTION references: man dar

    keywords: -e

    You can run any feature without actually performing the action. Dar will report any problem but will not create, remove or modify any file.



    ARCHIVE/BACKUP USER COMMENTS references: man dar

    keywords: --user-comment, -l -v, -l -q

    The backup header can hold a message from the user. This message is never ciphered nor compressed and is always available to anyone listing the archive summary (-l and -q options). Several macros are available to make this option more comfortable to use, like the current date, uid and gid, hostname, and the command-line used at backup creation.
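    As a sketch (the exact macro list is documented in dar's man page under --user-comment; the paths below are hypothetical):

```
dar -c backup -R /home/me --user-comment "made on %h the %d with: %c"

# the comment is then visible to anyone in the archive summary:
dar -l backup -q
```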



    PADDED ZEROS TO SLICE NUMBER references: man dar

    keywords: --min-digits

    Dar slices are numbered with integers starting from 1, which gives filenames of the following form: archive.1.dar, archive.2.dar, ..., archive.10.dar, etc. However, the lexicographical order used by many directory listing tools does not show these slices in order. For that reason, dar lets the user define how many zeros to pad slice numbers with, so that usual file browsers list slices as expected. For example, with a minimum of 3 digits, the slice names become: archive.001.dar, archive.002.dar, ..., archive.010.dar.
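    The padding is plain zero-padding of the slice number; this standalone sketch (using printf, not dar itself) shows the filenames you would get with --min-digits 3:

```shell
# print the slice names dar would generate with --min-digits 3
for n in 1 2 10 100; do
    printf 'archive.%03d.dar\n' "$n"
done
```

    Numbers wider than the minimum are never truncated: slice 1000 is still named archive.1000.dar.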



    MULTI-THREADING references: man dar

    keywords: --multi-thread

    Since release 2.7.0, compression can use several threads when the new per-block compression is used (as opposed to the streaming compression used so far, which is still available). Encryption can also be processed with multiple threads, even for old backups (no change at the encryption level). The user defines the number of threads to use for each process: compression/decompression as well as ciphering/deciphering.
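    A hedged example (the -z block-size syntax follows the dar 2.7 man page; check your version's documentation, and the paths are hypothetical):

```
# per-block zstd compression (64 kiB blocks) so that 4 threads can work in parallel
dar -c backup -R /home/me -zzstd:3:64k --multi-thread 4
```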

    DAR's Usage Notes
    Dar Documentation

    Command-line Usage Notes

    Introduction

    You will find here a collection of examples and use cases for several features of the dar suite command-line tools.

    Dar and remote backup

    This topic shows the different methods available to perform a remote backup (a backup of a system using remote storage). It does not describe the remote storage itself, nor the way to access it, but the common ways to do so. For precise descriptions/recipes on how to use dar with ssh, netcat, ftp or sftp, see the topics following this one.

    Between the host to back up and the storage host, we could use NFS and use dar as usual, possibly adding an IPSEC VPN if the underlying network is not secure (backup over the Internet, ...). There is nothing very complicated there and this is a valid solution.

    We could also split the backup into very small slices (using dar's -s and possibly -S options), slices that would be moved to/from the storage before the backup process continues creating/reading the next one. We could even make use of one or more of dar's -E, -F and -~ options to automate the process and get a pretty viable backup workflow.

    But what if for any reasons these previous methods were not acceptable for our use context?

    As a last resort, we can leverage the fact that dar can work from its standard input and output, and pipe these to any arbitrary command, which gives us the greatest freedom available. In the following we will list two different ways to do so:

    1. single pipe
    2. dual pipes

    Single pipe

    Full Backup

    dar can output its archive to its standard output instead of a given file. To activate it, use "-" as basename. Here is an example:

    dar -c - -R / -z | some_program

    or

    dar -c - -R / -z > named_pipe_or_file

    Note that file splitting is not available, as it does not make much sense when writing to a pipe. At the other end of the pipe (on the remote host), the data can be redirected to a file, with a proper filename (something that matches "*.1.dar").

    some_other_program > backup_name.1.dar

    It is also possible to redirect the output to dar_xform which can in turn, on the remote host, split the data flow into several slices, pausing between them if necessary, exactly as dar is able to do:

    some_other_program | dar_xform -s 100M - backup_name

    This will create backup_name.1.dar, backup_name.2.dar and so on. The resulting archive is totally compatible with those directly generated by dar.

    some_program and some_other_program can be anything you want.

    Restoration

    For restoration, dar has to read the archive from a pipe, which is possible by adding the --sequential-read option. This has however a drawback compared to the normal way dar behaves: dar can no longer seek to where a given file's data is located, but has to read the whole backup sequentially (the way tar behaves). The only consequence is a longer processing time, especially when restoring only a few files.

    On the storage host, we would use:

    dar_xform backup_name - | some_other_program

    or, if the archive is composed of a single slice:

    some_other_program < backup_name.1.dar

    While on the host to restore we would use:

    some_program | dar -x - --sequential-read ...other options...

    Differential/incremental Backup

    With a single pipe, the only possible way is to rely on the catalogue isolation operation. This operation can be performed on the storage host and the resulting isolated catalogue can then be transferred through a pipe back to the host to back up. But there is a better way: on-fly isolation.

    dar -c - -R / -z -@ isolated_full_catalogue | some_program

    This will produce a small file named isolated_full_catalogue.1.dar on the local host (the host to back up), which we can then use to create a differential/incremental backup:

    dar -c - -R / -z -@ isolated_diff_catalogue -A isolated_full_catalogue | some_program

    We can then remove isolated_full_catalogue.1.dar and keep the new isolated_diff_catalogue to proceed further with incremental backups. For differential backups, we would instead keep isolated_full_catalogue.1.dar and use the -@ option to create an on-fly isolated catalogue only when creating the full backup.

    The restoration process here is not different from what we saw above for the full backup. We will restore the full backup, then the differential and incremental, following their order of creation.
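    Concretely, the restoration chain could look like this on the host to restore (some_program_* stand for whatever transports each backup, as above; -w avoids a warning before overwriting files already restored from the full backup):

```
# restore the full backup first...
some_program_full | dar -x - --sequential-read -R /restore

# ...then each differential/incremental backup, in creation order
some_program_diff | dar -x - --sequential-read -R /restore -w
```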

    Dual pipes

    To overcome the limited performance met when reading an archive using a single pipe, we can use a pair of pipes instead and rely on dar_slave on the remote storage host.

    If we specify "-" as the backup basename for a reading operation (-l, -t, -d, -x, or -A when used with -C or -c), dar and dar_slave use their standard input and output to communicate. The input of the first is expected to receive the output of the second and vice versa.

    We could test this with a pair of named pipes, todar and toslave, and use shell redirections on dar and dar_slave to make the glue. But this does not work due to shell behavior: dar and dar_slave would block upon opening the first named pipe, each waiting for its peer to open it too, before either has even started (a deadlock at shell level).

    To overcome this issue with named pipes, the -i and -o options help: they receive a filename as argument, which may be a named pipe. The argument provided to -i is used instead of stdin, and the one provided to -o instead of stdout. Note that -i and -o are only available when "-" is used as basename. Let's take an example:

    Let's assume we want to restore an archive from the remote backup server. There, we have to run dar_slave this way:

    mkfifo /tmp/todar /tmp/toslave
    dar_slave -o /tmp/todar -i /tmp/toslave backup_name
    some_program_remote < /tmp/todar
    some_other_program_remote > /tmp/toslave

    We assume some_program_remote reads the data from /tmp/todar and makes it available to the host to restore, for dar to read it, while some_other_program_remote receives dar's output and writes it to /tmp/toslave.

    On the local host you have to run dar this way:

    mkfifo /tmp/todar /tmp/toslave
    dar -x - -i /tmp/todar -o /tmp/toslave -v ...
    some_program_local > /tmp/todar
    some_other_program_local < /tmp/toslave

    Here, some_program_local communicates with some_program_remote and writes the data received from dar_slave to the /tmp/todar named pipe. In the other direction, dar's output is read by some_other_program_local from /tmp/toslave, then sent (by a means that is out of the scope of this document) to some_other_program_remote, which in turn makes it available to dar_slave, as seen above.

    This also applies to differential backups when reading the archive of reference by means of the -A option. In the previous single-pipe context, we used an isolated catalogue. We can still do the same here, but we can also leverage this dual-pipe feature, especially for binary delta, which implies reading the delta signatures in addition to the metadata, something not possible in --sequential-read mode. We then come to the following architecture:

        LOCAL HOST                               REMOTE HOST
      +-----------------+             +-----------------------------+
      |   filesystem    |             |     backup of reference     |
      |       |         |             |         |                   |
      |       |         |             |         |                   |
      |       V         |             |         V                   |
      |    +-----+      |   backup of reference |   +-----------+   |
      |    | DAR |--<-]=========================[-<--| DAR_SLAVE |  |
      |    |     |-->-]=========================[->--|           |  |
      |    +-----+      |  orders to dar_slave  |   +-----------+   |
      |       |         |                       |   +-----------+   |
      |       +--->---]=========================[->--| DAR_XFORM |--> backup
      |                 |      saved data       |   +-----------+  to slices
      +-----------------+             +-----------------------------+

    with dar on localhost using the following syntax, reading from a pair of fifo the reference archive (-A option) and producing the differential backup to its standard output:

    mkfifo /tmp/toslave /tmp/todar
    some_program_local > /tmp/todar
    some_other_program_local < /tmp/toslave
    dar -c - -A - -i /tmp/todar -o /tmp/toslave [...other options...] | some_third_program_local

    While dar_slave is run this way on the remote host:

    mkfifo /tmp/toslave /tmp/todar
    some_program_remote < /tmp/todar
    some_other_program_remote > /tmp/toslave
    dar_slave -i /tmp/toslave -o /tmp/todar ref_backup

    Last, dar_xform receives the differential backup and splits it into 1 GiB slices, adding a sha1 hash to each:

    some_third_program_remote | dar_xform -s 1G -3 sha1 - diff_backup

    dar and netcat

    The netcat (nc) program is a simple but insecure (no authentication, no data ciphering) way to link dar with dar_slave or dar_xform, as presented in the previous topic.

    The following examples take place in a context where a "local" host named "flower" has to be backed up to, or restored from, a remote host called "honey" (OK, the machine names are silly...).

    Creating a full backup

    on honey:

    nc -l -p 5000 > backup.1.dar

    then on flower:

    dar -c - -R / -z | nc -w 3 honey 5000

    But this will produce only one slice. Instead, you could use the following to get several slices on honey:

    nc -l -p 5000 | dar_xform -s 10M -S 5M -p - backup

    By the way, note that dar_xform can also launch a user script between slices, exactly the same way as dar does, thanks to the -E and -F options.

    Testing the archive

    Testing the archive can be done on honey, but diffing (comparison) implies reading the filesystem of flower, so it must be run there. Both operations, as well as archive listing and other read operations, can leverage what follows:

    on honey:

    nc -l -p 5000 | dar_slave backup | nc -l -p 5001

    then on flower:

    nc -w 3 honey 5001 | dar -t - | nc -w 3 honey 5000

    Note that here too dar_slave can run a script between slices: if, for example, you need to load slices from a tape robot, this can be done automatically; or you may want to mount/unmount removable media, eject or load it, ask the user to change it, or whatever else you need.

    Comparing with original filesystem

    This is very similar to the previous example:

    on honey:

    nc -l -p 5000 | dar_slave backup | nc -l -p 5001

    while on flower:

    nc -w 3 honey 5001 | dar -d - -R / | nc -w 3 honey 5000

    Making a differential backup

    Here the problem is that dar needs two pipes to send orders and read data coming from dar_slave, and a third pipe to write out the new archive. This cannot be realized only with stdin and stdout as previously. Thus we will need a named pipe (created by the mkfifo command).

    On honey in two different terminals:

    nc -l -p 5000 | dar_slave backup | nc -l -p 5001
    nc -l -p 5002 | dar_xform -s 10M -p - diff_backup

    Then on flower:

    mkfifo toslave
    nc -w 3 honey 5000 < toslave &
    nc -w 3 honey 5001 | dar -A - -o toslave -c - -R / -z | nc -w 3 honey 5002

    With netcat the data goes in clear over the network. You can use ssh instead if you want encryption over the network. The principles are the same, so let's see this now:

    Dar and ssh

    Creating full backup

    We assume you have an sshd daemon running on flower. We can then run the following on honey:

    ssh flower dar -c - -R / -z > backup.1.dar

    Or still on honey:

    ssh flower dar -c - -R / -z | dar_xform -s 10M -S 5M -p - backup

    Testing the archive

    On honey:

    dar -t backup

    Comparing with original filesystem

    On flower:

    mkfifo todar toslave
    ssh honey dar_slave backup > todar < toslave &
    dar -d - -R / -i todar -o toslave

    Important: Depending on the shell you use, it may be necessary to invert the order in which "> todar" and "< toslave" are given on command line. The problem is that the shell hangs trying to open the pipes. Thanks to "/PeO" for his feedback.

    Or on honey:

    mkfifo todar toslave
    ssh flower dar -d - -R / > toslave < todar &
    dar_slave -i toslave -o todar backup

    Making a differential backup

    On flower:

    mkfifo todar toslave
    ssh honey dar_slave backup > todar < toslave &

    and on honey:

    ssh flower dar -c - -A - -i todar -o toslave > diff_linux.1.dar

    Or

    ssh flower dar -c - -A - -i todar -o toslave | dar_xform -s 10M -S 5M -p - diff_linux

    Integrated ssh support

    Since release 2.6.0, you can use a URL-like archive basename. Assuming you have slices test.1.dar, test.2.dar ... available in the directory Archive of an FTP or SFTP (ssh) server, you can read, extract, list, test, ... that archive using the following syntax:

    dar -t ftp://login@ftp.server.some.where/Archive/example1 ...other options
    dar -t sftp://login:pass@sftp.server.some.where/Archive/example2 ...other options
    dar -t sftp://sftp.server.some.where/Archive/example2 -afile-auth ...other options

    The same applies with the -l, -x, -A and -@ options. Note that you still need to provide the archive basename, not a slice name, as usually done with dar. This feature is also compatible with slicing and slice hashing, the hash files being generated on the remote server beside the slices:

    dar -c sftp://login:password@secured.server.some.where/Archive/day2/incremental \
        -A ftp://login@ftp.server.some.where/Archive/CAT_test --hash sha512 \
        -@ sftp://login2:password2@secured.server.some.where/Archive/day2/CAT_incremental \
        <other options>

    By default, if no password is given, dar asks for it interactively. If no login is given, dar assumes the login "anonymous". When you add the -afile-auth option and no password is present on the command-line, dar looks for one in ~/.netrc, for both the FTP and SFTP protocols, which avoids exposing the password on the command-line while still allowing non-interactive backups. See man netrc for this common file's syntax. Using -afile-auth also activates public key authentication when everything is set up for that (~/.ssh/id_rsa ...).
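    For reference, a ~/.netrc entry has the following shape (hostname and credentials below are placeholders; keep the file private with chmod 600):

```
machine secured.server.some.where
login mylogin
password mysecret
```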

    Comparing the different way to perform remote backup

    Since release 2.6.0 dar can directly use ftp or sftp to operate remotely. This new feature sometimes has advantages over the ssh-based methods described above, and sometimes has not; the objective here is to clarify the pros and cons of each method.

    Three methods are compared below, per operation: (1) dar + dar_slave/dar_xform through ssh (direct access mode), (2) dar alone over a single pipe (sequential read mode), and (3) the sftp/ftp support embedded in dar (direct access mode).

    Backup
    • through ssh (dar + dar_xform): best solution if you want to keep a local copy of the backup or if you want to push the resulting archive to several destinations; usable when sftp is not available but ssh is; the on-fly hash file is written locally (where dar_xform runs) and is thus computed by dar_xform, which cannot see network transmission errors
    • dar alone: efficient but does not support slicing; for the rest, as good a solution as with dar_xform
    • embedded sftp/ftp: best solution if you do not have space on local disks to store the resulting backup; requires on-fly isolation to local disk if you want to feed a local dar_manager database with the new archive; usable when ssh is not available but sftp is; the on-fly hash file is written to the remote directory beside the slice but calculated locally, which can be used to detect network transmission errors

    Testing / Diffing / Listing
    • through ssh (dar + dar_slave): workaround if you hit the sftp known_hosts limitation; usable when sftp is not available but ssh is; relies on dar <-> dar_slave exchanges, whose protocol is not designed for high-latency links and may give slow network performance in that situation
    • dar alone: very slow, as it requires reading the whole archive; maybe a simpler command line to execute
    • embedded sftp/ftp: best solution if filtering a few files from a large archive, as dar will fetch over the network only the necessary data; usable when ssh is not available but sftp is

    Restoration
    • dar alone: very slow, as it requires reading the whole archive
    • embedded sftp/ftp: efficient and simple; usable when ssh is not available but sftp is

    Merging (should be done locally rather than over the network if possible!!!)
    • through ssh: complicated, with many pipes to set up
    • dar alone: not supported!
    • embedded sftp/ftp: not adapted if you need to feed the merging result to a local dar_manager database (on-fly isolation is not available when merging with dar)

    Isolation
    • dar alone: very slow, as it requires reading the whole archive
    • embedded sftp/ftp: efficient and simple, transfers the least possible data over the network; usable when ssh is not available but sftp is

    Repairing (should be done locally rather than over the network if possible!!!)
    • through ssh: not supported!
    • dar alone: probably the best way to repair remotely in terms of efficiency, as this operation uses sequential reading
    • embedded sftp/ftp: usable when ssh is not available but sftp is

    Bytes, bits, kilo, mega etc.

    Apologies in advance for the following school-like introduction to the size prefixes available with dar, but it seems that the metric system is (still) not taught in all countries, leading some to ugly/erroneous writings... so let me recall what I was taught at school...

    You probably know the metric system a bit: a dimension is expressed by a base unit (the meter for distance, the liter for volume, the Joule for energy, the Volt for electrical potential, the bar for pressure, the Watt for power, the second for time, etc.), which can then be declined using prefixes:

    prefix (symbol) = ratio
    ================
    deci   (d)  = 0.1
    centi  (c)  = 0.01
    milli  (m)  = 0.001
    micro  (μ)  = 0.000,001
    nano   (n)  = 0.000,000,001
    pico   (p)  = 0.000,000,000,001
    femto  (f)  = 0.000,000,000,000,001
    atto   (a)  = 0.000,000,000,000,000,001
    zepto  (z)  = 0.000,000,000,000,000,000,001
    yocto  (y)  = 0.000,000,000,000,000,000,000,001
    ronto  (r)  = 0.000,000,000,000,000,000,000,000,001
    quecto (q)  = 0.000,000,000,000,000,000,000,000,000,001
    deca   (da) = 10
    hecto  (h)  = 100
    kilo   (k)  = 1,000 (yes, this is a lowercase letter, not an uppercase! The uppercase 'K' is the Kelvin: a temperature unit)
    mega   (M)  = 1,000,000
    giga   (G)  = 1,000,000,000
    tera   (T)  = 1,000,000,000,000
    peta   (P)  = 1,000,000,000,000,000
    exa    (E)  = 1,000,000,000,000,000,000
    zetta  (Z)  = 1,000,000,000,000,000,000,000
    yotta  (Y)  = 1,000,000,000,000,000,000,000,000
    ronna  (R)  = 1,000,000,000,000,000,000,000,000,000
    quetta (Q)  = 1,000,000,000,000,000,000,000,000,000,000

    Not all prefixes were introduced at the same time; the oldest (c, d, m, da, h, k) have existed since 1795, which explains why they are all lowercase and not all powers of 1000. Mega and micro were added in 1873. The rest is much more recent (1960, 1975, 1991, 2022 according to Wikipedia).

    Some other rules I was taught at school are:

    • the unit follows the number
    • a space has to be inserted between the number and the unit

    Thus, instead of writing "4K hour", the correct form is "4 kh", for four kilohours.

    This way, two milliseconds (noted "2 ms") are 0.002 second, and 5 kilometers (noted "5 km") are 5,000 meters. All was fine and nice until the recent time when computer science appeared: in that discipline, the need to measure the size of information storage arose. The smallest unit of size is the bit (a contraction of binary digit), binary because it has two possible states: "0" and "1". Grouping bits by 8, computer scientists called the result a byte, also known as an octet.

    A byte has 256 different states (2 to the power 8). When the ASCII (American Standard Code for Information Interchange) code arrived to assign a letter, or more generally a character, to each of the different values of a byte ('A' is assigned to 65, the space to 32, etc.), and as most text is composed of a set of characters, information size started to be counted in bytes. Time after time, following technology evolution, memory size approached 1000 bytes.

    But memory is accessed through a bus, which is a fixed number of cables (or integrated circuits) on which only two voltages are allowed (meaning 0 or 1), so the total number of bytes that a bus can address is always a power of 2 here too. With a two-cable bus you can have 4 values (00, 01, 10 and 11, where each digit is the state of one cable), so you can address 4 bytes.

    Giving a value to each cable defines an address to read or write in the memory. So when memory sizes approached 1000 bytes, they could address 1024 bytes (2 to the power 10), and it was decided that a "kilobyte" would be just that: 1024 bytes. Some time after, and by extension, a megabyte was defined to be 1024 kilobytes, a gigabyte to be 1024 megabytes, etc., with the exception of the 1.44 MB floppy, whose capacity is 1440 kilobytes: there, "mega" means 1000 kilo...

    In parallel, in the telecommunications domain, going from analog to digital signals brought the bit into use as well. In place of the analog signal came a flow of bits, representing the samples of the original signal. For telecommunications the problem was rather one of flow rate: how many bits can be transmitted per second. In ancient times appeared 1200 bits per second, then 64000, also designated as 64 kbit/s. Here, kilo keeps its usual meaning of 1000 times the base unit. You can also find 10 Mbit/s Ethernet, which is 10,000,000 bit/s, and still today the latest 400 Gbit/s Ethernet is 400,000,000,000 bit/s. Same thing with Token-Ring, which had rates of 4, 16 or 100 Mbit/s (4,000,000, 16,000,000 or 100,000,000 bit/s). But even in telecommunications, kilo is not always 1000 times the base unit: the E1 bandwidth at 2 Mbit/s, for example, is in fact 32*64 kbit/s, thus 2048 kbit/s... not 2000 kbit/s.

    Anyway, back to dar and the present time: you have the possibility to use the SI unit prefixes (k, M, G, T, P, E, Z, Y, R, Q) as number suffixes, like 10k for the number 10,000. This is convenient, though not correct with regard to SI system rules, but so frequently used today that my now old school teachers would probably not complain too loudly ;^)

    With this suffix notation the base unit is implicitly the byte, giving the possibility to provide sizes in kilo, mega, giga, tera, peta, exa, zetta, yotta, ronna or quetta bytes, using by default the computer science definition of these terms: powers of 1024, which today correspond to the kiB, MiB... unit symbols.

    These suffixes exist for simplicity, so that you do not have to compute powers of 1024 yourself. For example, if you want to fill a CD-R you can use the "-s 650M" option, which is equivalent to "-s 681574400"; choose the one you prefer, the result is the same :-).

    Now, if you want 2-megabyte slices in the metric-system sense, simply use "-s 2000000". And since version 2.2.0, you can alter the meaning of all these suffixes with the --alter=SI-units option (which can be shortened to -aSI or -asi):

    -aSI -s 2k
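    The difference can be checked with plain shell arithmetic; here is what "2k" expands to in both interpretations:

```shell
# default (computer science) meaning of -s 2k
echo $((2 * 1024))      # 2048 bytes

# metric meaning selected by -aSI -s 2k
echo $((2 * 1000))      # 2000 bytes
```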

    Yes, and to make things more confusing, marketing/sales people arrived and counted gigabytes a third way: I remember that some time ago I bought a hard disk described as "2.1 GB" (OK, that's long ago now! ~ year 2000), but it had in fact only 2097152 kilobytes available. This is much below 2202009 kilobytes (= 2.1 GiB in the computer science meaning), while a bit more than 2,000,000 kilobytes (metric system). OK, if it had had these 2202009 kilobytes (the computer science meaning of 2.1 GB), would this hard disk have been sold under the label "2.3 GB"!? ... just kidding :-)

    Note that to distinguish kilo, mega, tera and so on, new abbreviations are officially defined, but are not used within dar:

    ki = 1024
    Mi = 1024*1024
    Gi = 1024*1024*1024
    and so on for Ti, Pi, Ei, Zi, Yi, Ri, Qi

    For example, we have 1 kiB for 1 kilobytes (= 1024 bytes), and 1 kibit for 1 kilobits (= 1024 bits) and 1 kB (= 1000 Bytes) and 1 kbit (= 1000 bits)...

    Running DAR in background

    DAR can be run in background this way:

    dar [command-line arguments] < /dev/null &

    Files' extension used

    The dar suite programs may use several types of files:

    • slices (dar, dar_xform, dar_slave, dar_manager)
    • configuration files (dar, dar_xform, dar_slave)
    • databases (dar_manager)
    • user commands for slices (dar, dar_xform, dar_slave, using -E, -F or -~ options)
    • user commands for files (dar only, during the backup process using -= option)
    • filter lists (dar's -[ and -] options)

    While for slices the extension and even the filename format cannot be customized (basename.slicenumber.dar), there is no mandatory rule for the other types of files.

    In case you have no idea how to name these, here are the extensions I use:

    • "*.dcf": Dar Configuration file, aka DCF files (used with dar's -B option)
    • "*.dmd": Dar Manager Database, aka DMD files (used with dar_manager's -B and -C options)
    • "*.duc": Dar User Command, aka DUC files (used with dar's -E, -F, -~ options)
    • "*.dbp": Dar Backup Preparation, aka DBP files (used with dar's -= option)
    • "*.dfl": Dar Filter List, aka DFL files (used with dar's -[ or -] options)

    but, you are totally free to use the filename you want! ;-)
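    For instance, a DCF file is just a list of dar command-line options, one or several per line, with #-comments allowed (the content below is an example of mine, not a requirement):

```
# common.dcf -- used as: dar -c my_backup -B common.dcf
-R /
-z
-s 650M
-P proc
-P sys
-P tmp
```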

    Running command or scripts from DAR

    You can make dar run commands at two different places:

    • between slices (DUC files): after dar has finished writing a slice (in backup, isolation or merging modes), or before dar needs a slice (in reading modes: testing, diffing, extracting, ..., and when reading an archive of reference)
    • before and after saving a given file during the backup process (DBP files)

    Between slices

    This concerns the -E, -F and -~ options. They all receive a string as argument. Thus, if the argument is a command with its own arguments, you have to put it between quotes so that it appears as a single string to the shell interpreting the dar command-line. For example, if you want to call df . you have to use the following on the dar command-line:

    -E "df ."

    or

    -E 'df .'

    DAR provides several substitution strings in that context:

    • %% is replaced by a single %. Thus, if you need a % in your command line you MUST write it %% in the argument string of the -E, -F or -~ options.
    • %p is replaced by the path to the slices
    • %b is replaced by the basename of the slices
    • %n is replaced by the number of the slice
    • %N is replaced by the number of the slice with padded zeros (it may differ from %n only when --min-digits option is used)
    • %c is replaced by the context, which is either "init", "operation" or "last_slice", values explained below

    The number of the slice (%n and %N) is either the just-written slice or the next slice to be read. For example, if you create a new archive (using -c, -C or -+), the %n macro in the -E option is the number of the last completed slice; otherwise (using -t, -d, -A (with -c or -C), -l or -x), it is the number of the slice that will be required very soon.

    %c (the context) is substituted by "init", "operation" or "last_slice" in the following conditions:

    • init: when the slice is asked before the catalogue is read
    • operation: once the catalogue is read and/or data treatment has begun.
    • last_slice: when the last slice has been written (archive creation only)
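    A DUC script can branch on the %c context; here is a minimal sketch (the script name and the argument wiring via -E "duc.sh %c %n" are assumptions, not dar requirements):

```shell
#!/bin/sh
# duc.sh -- called from dar as: -E "duc.sh %c %n"
report_context() {
    context="$1"
    slice="$2"
    case "$context" in
        init)       echo "slice $slice requested before the catalogue was read" ;;
        operation)  echo "slice $slice handled during normal processing" ;;
        last_slice) echo "last slice ($slice) completed" ;;
        *)          echo "unknown context: $context" ;;
    esac
}

report_context "${1:-operation}" "${2:-1}"
```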

    What is the use of this feature? Suppose, for example, you want to burn the brand-new slices to CD as soon as they are available.

    Let's build a little script for that:

    % cat burner
    #!/bin/bash

    if [ "$1" == "" -o "$2" == "" ] ; then
        echo "usage: $0 <filename> <number>"
        exit 1
    fi

    mkdir T
    mv "$1" T
    mkisofs -o /tmp/image.iso -r -J -V "archive_$2" T
    cdrecord dev=0,0 speed=8 -data /tmp/image.iso
    rm /tmp/image.iso
    # Now assuming an automount will mount the just newly burnt CD:
    if diff "/mnt/cdrom/$1" "T/$1" ; then
        rm -rf T
    else
        exit 2
    fi

    %

    This little script receives the slice filename and its number as arguments; it burns a CD with the slice and compares the resulting CD with the original file. Upon failure, the script returns 2 (or 1 if the command-line syntax is not correct). Note that this script is only here for illustration; there are many more interesting user scripts made by several dar users, available in the examples part of the documentation.

    One could then use it this way:

    -E "./burner %p/%b.%n.dar %n"

    which can lead to the following DAR command-line:

    dar -c ~/tmp/example -z -R / usr/local -s 650M -E "./burner %p/%b.%n.dar %n" -p

    First, note that as our script does not change the CD in the device, we need to pause between slices (-p option). The pause takes place after the execution of the command (-E option). Thus we could add to the script a command to send a mail or play music to inform us that the slice has been burnt. The advantage here is that we don't have to come back twice per slice: once when the slice is ready, and once when it is burnt.

    Another example:

    You want to send a huge file by email. (OK, it would be better to use FTP, SFTP, etc., but let's assume we have to work around a server failure, or the absence of such a service.) So let's suppose that you only have mail available to transfer your data:

     dar -c toto -s 2M my_huge_file \
       -E "uuencode %b.%n.dar %b.%n.dar | mail -s 'slice %n' your@email.address ; rm %b.%n.dar ; sleep 300"

    Here we make an archive with slices of 2 megabytes, because our mail system does not allow larger emails. We save only one file, "my_huge_file" (but we could even save the whole filesystem, it would also work). The command we execute each time a slice is ready does the following:

    1. uuencode the slice and send the output by email to our address,
    2. remove the slice,
    3. wait 5 minutes, so as not to overload the mail system. This is also useful if you have a small mailbox, from which it takes time to retrieve mail.

    Note that we did not use the %p substitution string, as the slices are saved in the current directory.

    The last example is about extraction: in case all the slices cannot be present in the filesystem, you need a script or a command to fetch the next slice to be requested. It could use ftp, lynx, ssh, etc. I let you write the script as an exercise. :-) Note: if you plan to share your DUC files, please follow the convention for DUC files.

    Before and after saving a file

    This concerns the -=, -< and -> options. The -< (include) and -> (exclude) options let you define which files will need a command to be run before and after their backup, while the -= option lets you define which command to run for those files.

    Let's suppose you have a very large, often-changing file located in /home/my/big/file, and a running software that modifies several files under /home/*/data which need to have a coherent status and also change very often.

    Saving them without precaution will most probably get your big file flagged as "dirty" in dar's archive, which means that the saved status of the file may be a status it never had: when dar saves a file, it reads the first byte, then the second, and so on up to the end of the file. While dar is reading the middle of the file, an application may change the very beginning and then the very end of that file; only the modified ending of the file would be saved, leading the archive to contain a copy of the file in a state it never had.

    For a set of different files that need a coherent status, this is even worse: if dar saves one file while another file is modified at the same time, the saved files will not be flagged as "dirty", but the software relying on this set of files may fail after restoring them, because of the incoherent states between them.

    For that situation not to occur, we will use the following options:

    -R / "-<" home/my/big/file "-<" "home/*/data"

    First, you must pay attention to the quotes around the -< and -> options, for the shell not to consider them as requests for redirection to stdout or from stdin.

    Back to the example: this says that for the file /home/my/big/file and for any "home/*/data" directory (or file), a command will be run before and after saving that directory or file. We thus need to define the command to run, using the following option:

    -= "/root/scripts/before_after_backup.sh %f %p %c"

    As you see, here too we may (and should) use substitution macros:

    • %% is replaced by a literal %
    • %p is replaced by the full path (including filename) of the file/directory to be saved
    • %f is replaced by the filename (without path) of the file/directory to be saved
    • %u is replaced by the uid of the file's owner
    • %g is replaced by the gid of the file's owner
    • %c is replaced by the context, which is either "start" or "end" depending on whether the file/directory is about to be saved or has been completely saved.

    And our script here could look like this:

     % cat /root/scripts/before_after_backup.sh
     #!/bin/sh

     if [ "$1" = "" ]; then
        echo "usage: $0 <filename> <dir+filename> <context>"
        exit 1
     fi

     # for better readability:
     filename="$1"
     path_file="$2"
     context="$3"

     if [ "$filename" = "data" ] ; then
        if [ "$context" = "start" ] ; then
           : # action to suspend the software using files located in "$2"
        else
           : # action to resume the software using files located in "$2"
        fi
     else
        if [ "$path_file" = "/home/my/big/file" ] ; then
           if [ "$context" = "start" ] ; then
              : # suspend the application that writes to that file
           else
              : # resume the application that writes to that file
           fi
        else
           : # do nothing, or warn that no action is defined for that file
        fi
     fi

    So now, if we run dar with all these options, dar will execute our script once before entering a data directory located in some user's home directory, and once all files of that directory have been saved. It will also run our script before and after saving our /home/my/big/file file.

    If you plan to share your DBP files, please follow the DBP convention.

    Convention for DUC files

    Since version 1.2.0, dar can call a command or script (called a DUC file) between slices, thanks to the -E, -F and -~ options. To be able to easily share your DUC commands or scripts, the following convention is proposed:

    • use the ".duc" extension to show anyone that the script/command respects the following convention:

    • it must be callable from dar with the following arguments:

      example.duc %p %b %n %e %c [other optional arguments]
    • when called without arguments, it must display brief help on what it does and what the expected arguments are. This is the standard "usage:" convention.

      Then any user could share their DUC files without bothering much about how to use them. Moreover, it would be easy to chain them: if for example two persons created their own scripts, one burn.duc which burns a slice on DVD-R(W) and one par.duc which makes a Parchive redundancy file from a slice, anybody could use both at a time by giving the following arguments to dar:

      -E "par.duc %p %b %n %e %c 1" -E "burn.duc %p %b %n %e %c"

      Of course, a script does not have to use all its arguments. In the case of burn.duc for example, the %c (context) is probably useless and would not be used inside the script, while it is still possible to give it all the "normal" arguments of a DUC file, those not used simply being ignored.

      If you have interesting DUC scripts, you are welcome to contact the dar maintainer (not the maintainer of a particular distro) by email, so they can be added to the web site and to following releases. For now, check the doc/samples directory for a few examples of DUC files.

    • Note that all DUC scripts are expected to return an exit status of zero, meaning that the operation has succeeded. If another exit status is returned, dar asks the user for a decision (or aborts if no user can be consulted, for example when dar is not run under a controlling terminal).
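    A minimal skeleton honoring this convention could look as follows (a sketch only: the script name and messages are illustrative, not an official template):

```shell
#!/bin/sh
# Skeleton of a DUC file following the proposed convention:
# called by dar as:  example.duc %p %b %n %e %c [other optional arguments]

duc_main() {
    if [ $# -lt 5 ]; then
        # the standard "usage:" convention when called without argument
        echo "usage: example.duc <path> <basename> <number> <extension> <context>"
        return 1
    fi
    p="$1"; b="$2"; n="$3"; e="$4"; c="$5"
    echo "processing slice $p/$b.$n.$e (context: $c)"
    return 0   # zero exit status: dar considers the operation succeeded
}

duc_main || :                             # prints the usage line
duc_main /mnt/backup example 1 dar operation
```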

    Convention for DBP files

    As above, the following convention is proposed to ease the sharing of Dar Backup Preparation (DBP) files:

    • use the ".dbp" extension to show anyone that the script/command respects the following convention:

    • it must be callable from dar with the following arguments:

      example.dbp %p %f %u %g %c [other optional arguments]
    • when called without arguments, it must display brief help on what it does and what the expected arguments are. This is the standard "usage:" convention.

    • Identically to DUC files, DBP files are expected to return an exit status of zero; otherwise the backup process is suspended for the user to decide whether to retry, ignore the failure or abort the whole backup process.

    User targets in DCF

    Since release 2.4.0, a DCF file (a file given to the -B option) can contain user targets. A user target is an extension of the conditional syntax, so we will first make a brief review of the conditional syntax:

    Conditional syntax in DCF files

    The conditional syntax gives the possibility to have options in a DCF file that are only active in a certain context:

    • archive extraction (extract:)
    • archive creation (create:)
    • archive listing (list:)
    • archive testing (test:)
    • archive comparison (diff:)
    • archive isolation (isolate:)
    • archive merging (merge:)
    • no action yet defined (default:)
    • all context (all:)
    • when an archive of reference is used (reference:)
    • when an auxiliary archive of reference is used (auxiliary:)

    All options given after one of these keywords, up to the next keyword or the end of the file, take effect only in the corresponding context. An example should clarify this:

     % cat sample.dcf
     # this is a comment
     all:
     --min-digits 3
     extract:
     -R /
     reference:
     -J aes:
     auxiliary:
     -~ aes:
     create:
     -K aes:
     -ac
     -Z "*.mp3"
     -Z "*.avi"
     -zlz4
     isolate:
     -K aes:
     -zlzo
     default:
     -V

    This way, the -Z options are only used when creating an archive, while the --min-digits option is used in any case. This ends the review of the conditional syntax.

    User targets

    As stated previously, the user targets feature extends the conditional syntax we just reviewed: new, user-defined "targets" can be added. The options that follow them are activated only if the keyword of the target is passed on the command-line or in a DCF file. Let's take an example:

     % cat my_dcf_file.dcf
     compress:
     -z lzo:5

    By default, all that follows the line "compress:", up to the next target or (as here) up to the end of the file, is ignored, unless the compress keyword is passed on the command-line:

    dar -c test -B my_dcf_file.dcf compress

    Which does exactly the same as if you had typed:

    dar -c test -z lzo:5

    Of course, you can use as many user targets as you wish in your files; the only constraint is that they must not have the name of a reserved keyword of the conditional syntax. You can also mix conditional syntax and user targets. Here follows a last example:

     % cat sample.dcf
     # this is a comment
     all:
     --min-digits 3
     extract:
     -R /
     reference:
     -J aes:
     auxiliary:
     -~ aes:
     create:
     -K aes:
     -ac
     -Z "*.mp3"
     -Z "*.avi"
     default:
     -V
     # our first user target named "compress":
     compress:
     -z lzo:5
     # a second user target named "verbose":
     verbose:
     -v
     -vs
     # a third user target named "ring":
     ring:
     -b
     # a last user target named "hash":
     hash:
     --hash sha1

    You can now use dar and activate a set of options by simply adding the name of the target on the command-line:

    dar -c test -B sample.dcf compress ring verbose hash

    which is equivalent to:

    dar -c test --min-digits 3 -K aes: -ac -Z "*.mp3" -Z "*.avi" -z lzo:5 -v -vs -b --hash sha1

    Last, for those who like complicated things: you can recursively use DCF files inside user targets, which may themselves contain conditional syntax and the same or other user targets of your own.

    Using data protection with DAR & Parchive

    Parchive (par or par2 in the following) is a very nice program that makes it possible to recover a file which has been corrupted. It creates redundancy data stored in a separate file (or set of files), which can be used to repair the original file. This additional data may also be damaged; par will be able to repair the original file as well as the redundancy files, up to a certain point, of course. This point is defined by the percentage of redundancy you defined for a given file. The par reference sites are:

    Since version 2.4.0, dar is provided with a default /etc/darrc file. It contains a set of user targets, among which is par2. This user target is the visible surface of the par2 integration with dar: it invokes the dar_par.dcf file provided with dar, which automatically creates a parity file for each slice during backup. When testing an archive, it verifies the parity data against the archive and, if necessary, repairs slices. So you now only need to install par2 and use dar this way to activate the Parchive integration:

    dar [options] par2

    Simple, no?

    Examples of file filtering

    File filtering defines which files are saved, listed, restored, compared, tested, considered for merging, and so on. In brief, in the following we will speak of which files are elected for the "operation", be it a backup, a restoration, an archive contents listing, an archive comparison, etc.

    On dar's command-line, file filtering is done using the -X, -I, -P, -R, -[, -], -g, --filter-by-ea and --nodump options. All these options are of course also available through the libdar API.

    OK, let's start with some concrete examples:

    dar -c toto

    This will backup the current directory and everything located in it, to build the toto archive, also located in the current directory. You should usually get a warning telling you that you are about to backup the archive itself.

    Now let's see something more interesting:

    dar -c toto -R / -g home/ftp

    The -R option tells dar to consider all files under the / root directory, while the -g "home/ftp" argument tells dar to restrict the operation to the home/ftp subdirectory of the given root directory, which here is /home/ftp.

    But this is a little bit different from the following:

    dar -c toto -R /home/ftp

    Here dar will save any file under /home/ftp without any restriction. So what is the difference from the previous form? Both will save just the same files, but the file /home/ftp/welcome.msg, for example, will be stored as <ROOT>/home/ftp/welcome.msg in the first example, while it will be saved as <ROOT>/welcome.msg in the second. Here <ROOT> is a symbolic representation of the filesystem root, which at restoration or comparison time will be substituted by the argument given to the -R option (which defaults to "."). Let's continue with other filtering mechanisms:

    dar -c toto -R / -g home/ftp -P home/ftp/pub

    Same as previously, but the -P option causes all files under /home/ftp/pub not to be considered for the operation. If the -P option is used without the -g option, all files under the -R root directory, except the ones pointed to by -P options (which can be used several times), are saved.

    dar -c toto -R / -P etc/password -g etc

    Here we save all of /etc except the /etc/password file. Arguments given to -P can also be plain files. But when they are directories, the exclusion applies to the directory itself and its contents. Note that using -X to exclude "password" does not have quite the same effect:

    dar -c toto -R / -X "password" -g etc

    This will save all the /etc directory except any file whose name is "password". Thus, of course, /etc/password will not be saved, but if it exists, /etc/rc.d/password will not be saved either, if it is not a directory. Indeed, if a directory /etc/rc.d/password exists, it will not be affected by the -X option: like the -I option, the -X option does not apply to directories. The reason is to be able to filter files by type (file extension for example) without excluding a particular directory. For example, suppose you want to save all MP3 files and only MP3 files:

    dar -c toto -R / --alter=no-case -I "*.mp3" home/ftp

    This will save any file ending with "mp3" or "MP3" (--alter=no-case modifies the default behavior and makes the masks that follow it case insensitive; use --alter=case to revert to the default behavior for subsequent masks). The backup is restricted to /home/ftp and its subdirectories. If -I (or -X) instead applied to directories, we would only be able to recurse into subdirectories ending with ".mp3" or ".MP3". If you had a directory named "/home/ftp/Music", for example, full of mp3 files, you would not have been able to save it.

    Note that glob expressions (which provide the shell-like wild-cards '*', '?' and so on) can do much more complicated things, like "*.[mM][pP]3". You could thus replace the previous example by the following, with the same result:

    dar -c toto -R / -I "*.[mM][pP]3" home/ftp

    Instead of glob expressions, you can use regular expressions (regex) thanks to the -aregex option, and switch back to glob expressions with -aglob. Each -aregex/-aglob option modifies the filter options that follow it on the command-line or in -B included files. This affects the -I/-X/-P options for file filtering, as well as the -u/-U options for Extended Attributes filtering and the -Z/-Y options for selecting files for compression.

    Now to the inside algorithm, to understand how the -X/-I options on one side and the -P/-g/-[/-] options on the other act relative to each other: a file is elected for the operation if:

    1. its name does not match any -X option, or it is a directory,
    2. and, if some -I options are given, the file is either a directory or matches at least one of them,
    3. and its path and filename do not match any -P option,
    4. and, if some -g options are given, the path to the file matches at least one of them.
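    These four rules can be modeled with plain shell glob matching (a simplified sketch for illustration only; it ignores dar details such as how directories on the way to a -g target are handled):

```shell
#!/bin/sh
set -f   # keep glob masks literal (no pathname expansion)

# Hypothetical filter sets mimicking -X, -I, -P and -g arguments:
X_MASKS="*.tmp"
I_MASKS=""                   # empty means: no -I restriction
P_PREFIXES="home/ftp/pub"
G_PREFIXES="home/ftp"

matches_any() {              # $1 = name to test, $2 = space-separated glob masks
    for m in $2; do
        case "$1" in $m) return 0 ;; esac
    done
    return 1
}

elected() {                  # $1 = path relative to the -R root, $2 = is_dir (y/n)
    f=${1##*/}
    # rule 1: name must not match any -X mask (directories are exempt)
    if [ "$2" = n ] && matches_any "$f" "$X_MASKS"; then return 1; fi
    # rule 2: if -I masks exist, the name must match one (directories exempt)
    if [ -n "$I_MASKS" ] && [ "$2" = n ]; then
        matches_any "$f" "$I_MASKS" || return 1
    fi
    # rule 3: the path must not lie under any -P prefix
    for p in $P_PREFIXES; do
        case "$1" in "$p"|"$p"/*) return 1 ;; esac
    done
    # rule 4: if -g prefixes exist, the path must lie under one of them
    [ -z "$G_PREFIXES" ] && return 0
    for g in $G_PREFIXES; do
        case "$1" in "$g"|"$g"/*) return 0 ;; esac
    done
    return 1
}

elected home/ftp/welcome.msg n && echo saved || echo skipped
elected home/ftp/pub/file n    && echo saved || echo skipped
```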

    The algorithm detailed above is the default one, which is historical and called the unordered method. But since version 2.2.x there is also a more powerful ordered method (activated by adding the -am option) which gives even more freedom to filters. The dar man page will give you all the details; in short, it makes a mask take precedence over the ones found before it on the command-line:

    dar -c toto -R / -am -P home -g home/denis -P home/denis/.ssh

    This will save everything except what's in /home, but /home/denis will derogate from that and will be saved, except for what's in /home/denis/.ssh. -X and -I also act similarly between them: when -am is used, the latest filter met takes precedence (but -P/-g do not interfere with -X/-I).
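    The last-match-wins behavior of the ordered method, for the -P/-g example above, can be sketched the same way (a simplified model, not dar's actual implementation):

```shell
#!/bin/sh
# Ordered (-am) evaluation of -P (skip) and -g (save) prefixes:
# filters are scanned in command-line order, the LAST match decides.
FILTERS="P:home G:home/denis P:home/denis/.ssh"

elected_ordered() {          # $1 = path relative to the -R root
    decision=save            # default when no filter matches
    for f in $FILTERS; do
        kind=${f%%:*}; prefix=${f#*:}
        case "$1" in
            "$prefix"|"$prefix"/*)
                if [ "$kind" = P ]; then decision=skip; else decision=save; fi ;;
        esac
    done
    [ "$decision" = save ]
}

elected_ordered home/denis/work.txt    && echo saved || echo skipped
elected_ordered home/joe/file          && echo saved || echo skipped
elected_ordered home/denis/.ssh/id_rsa && echo saved || echo skipped
```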

    To summarize: in parallel to file filtering, you will find Extended Attributes filtering thanks to the -u and -U options (they work like the -X and -I options but apply to EA). You will also find file compression filtering (-Z and -Y options), which defines which files to compress or not to compress; here too they work the same way as the -X and -I options. The -ano-case and -acase options apply to all of these, as does the -am option. Last, all these filters (file, EA, compression) can also use regular expressions in place of glob expressions (thanks to the -ag / -ar options).

    Decremental Backup

    Introduction

    Well, you have already heard about the "full" backup, in which all files are completely saved in such a way that this backup alone lets you completely restore your data. You have also probably heard about the "differential" backup, which stores only the changes that occurred since an archive of reference was made. There is also the "incremental" backup, which in substance is the same as a "differential" one. The difference resides in the nature of the archive of reference: a "differential" backup uses only a "full" backup as reference, while an "incremental" one may use a "full", a "differential" or another "incremental" backup as reference (in dar's documentation the term "differential" is commonly used in place of "incremental", since there is no conceptual difference from the point of view of the dar software).

    Let's now see a new type of backup: the "decremental" backup. It all started with a feature request from Yuraukar on the dar-support mailing-list:

    In the full/differential backup scheme, for a given file you have as many versions as changes that were detected from backup to backup. That's fair in terms of storage space required, as you do not store the same file state twice, which you would do if you were doing only full backups. But the drawback is that you do not know in advance in which backup to find the latest version of a given file. Another drawback appears when you want to restore your entire system to the latest state available from your backup set: you need to restore the most ancient backup (the latest full backup), then the others one by one in chronological order (the incremental/differential backups). This may take some time, yes. It is moreover inefficient, because you will restore N old revisions of a file that changed often before restoring its last and most recent version.

    Yuraukar's idea was to have all the latest versions of files in the latest backup made. Thus the most recent archive would always be a full backup. But to still be able to restore a file in an older state than the most recent one (in case of accidental suppression), we need a so-called decremental backup. This backup's archive of reference is in the future (a more recent decremental backup, or the latest backup made, which is a full backup in this scheme). This "decremental" backup stores all the file differences from the archive of reference that let you go from the reference state back to an older state.

    Assuming it is more probable that you restore the latest version of a filesystem than any older state available, decremental backups seem an interesting alternative to incremental backups: in that case you only have to use one archive (the latest), and each file gets restored only once (old data does not get overwritten at each archive restoration, as is the case with incremental restoration).

    Let's take an example. We have 4 files in the system named f1, f2, f3 and f4. We make backups at four different times t1, t2, t3 and t4, in chronological order. We will also perform some changes in the filesystem over this period: f1 will be removed from the system between t3 and t4, while f4 will only appear between t3 and t4. f2 will be modified between t2 and t3, while f3 will be changed between t3 and t4.

    All this can be represented this way, where lines show the state at a given date and each column represents a given file.

     time
       ^
       |                       * represents the version 1 of a file
     t4+      #    #    *      # represents the version 2 of a file
       |
     t3+ *    #    *
       |
     t2+ *    *    *
       |
     t1+ *    *    *
       |
       +----+----+----+----+---
         f1   f2   f3   f4

    Now we will represent the contents of the backups made at these different times, first using only full backups, then using incremental backups and at last using decremental backups. We will use the symbol '0' in place of data if a given file's data is not stored in the archive because it has not changed since the archive of reference was made. We will also use an 'x' to represent the information that a given file has been recorded in an archive as deleted since the archive of reference was made. This information is used at restoration time to remove the file from the filesystem, in order to reproduce the exact state of files seen at the date the backup was made.

    Full backups behavior

       ^
       |
     t4+      #    #    *
       |
     t3+ *    #    *
       |
     t2+ *    *    *
       |
     t1+ *    *    *
       |
       +----+----+----+----+---
         f1   f2   f3   f4

    Yes, this is easy: each backup contains all the files that existed at the time the backup was made. To restore the system to the state it had at a given date, we only use one backup, the one that best corresponds to that date. The drawback is that we saved version 1 of files f1 and f3 three times, and f2's version 2 twice, which is a waste of storage space.

    Full/Incremental backups behavior

       ^
       |
     t4+ x    0    #    *      0 represents a file whose state only is
       |                         recorded; no data is stored in the archive,
     t3+ 0    #    0             so very little space is consumed by such
       |                         an entry
     t2+ 0    0    0
       |                       x represents an entry telling that the
     t1+ *    *    *             corresponding file has to be removed
       |
       +----+----+----+----+---
         f1   f2   f3   f4

    Now we see that the archive made at date t2 does not contain any data, as no change was detected between t1 and t2. This backup is quite small and needs only little storage. The archive made at t3 only stores f2's new version, and at t4 the archive stores the new file f4 and f3's new version. We also see that at t4, f1 is marked as removed from the filesystem, as it no longer exists in the filesystem but existed in the archive of reference made at t3.

    As you see, restoring to the latest state is more complicated than with full backups only; neither is it simple to know in which backup to look for a given file's data at date t3, for example. But yes, we no longer waste storage space. The restoration process the user has to follow is to restore in turn:

    • the archive made at t1, which will put in place old versions of files and restore f1, which was removed at t4
    • the archive made at t2, which will do nothing at all
    • the archive made at t3, which will replace f2's old version by its new one
    • the archive made at t4, which will remove f1, add f4 and replace f3's old version by its latest version.

    The latest versions of files are scattered over the two last archives here, but on common systems much of the data does not change at all and can only be found in the first backup (the full backup).

    Decremental backup behavior

    Here are represented the contents of backups using the decremental approach. The most recent backup (t4) is always a full backup. Older backups are decremental backups based on the next more recent one (t3 is a difference based on t4, t1 is a difference based on t2). Contrary to incremental backups, the archive of reference is in the future, not in the past.

       ^
       |
     t4+      #    #    *
       |
     t3+ *    0    *    x
       |
     t2+ 0    *    0
       |
     t1+ 0    0    0
       |
       +----+----+----+----+---
         f1   f2   f3   f4

    Thus, obtaining the latest version of the system is as easy as with full backups only. And you also see that the space required to store these decremental backups is equivalent to what is needed to store the incremental backups. However, the problem still exists of locating the archive in which to find a given file's data at a given date. But you may also see that the backup made at time t1 can safely be removed, as it became useless (it does not store any data), and losing the archives made at t1 and t2 is not a big problem: you just lose old state data.

    Now, if we want to restore the filesystem to the state it had at time t3, we have to restore the archive made at t4, then the archive made at t3. This last step will create f1, replace f3 by its older version and delete f4, which did not exist at time t3 (the file is marked 'x', meaning that it has to be removed). If we want to go further into the past, we restore the decremental backup t2, which will only replace f2's new version by the older version 1. Last, restoring t1 will have no effect, as no change was made between t1 and t2.

    What about dar_manager? Well, by nature, there is no difference between a decremental backup and a differential/incremental backup. The only difference resides in the way (the order) they have to be used. So, even if you can add decremental backups to a dar_manager database, it is not designed to handle them correctly. It is thus better to keep dar_manager for incremental/differential/full backups only.

    Decremental backup theory

    But how can a decremental backup be built, as the reference is in the future and does not exist yet?

    Assuming you have a full backup describing your system at date t1, can we in one shot both make the new full backup for time t2 and also transform the full backup of time t1 into a decremental backup relative to time t2? In theory, yes. But there is a risk in case of failure (filesystem full, lack of electrical power, bug, ...): you may lose both backups, the one which was under construction as well as the one taken as reference, which was being transformed into a decremental backup.

    Given this, the libdar implementation lets the user do a normal full backup at each step [doing just a differential backup sounds better at first, but this would end in more archive manipulation, as we would have to generate both the decremental and the new full backup, manipulating at least the same amount of data]. Then, with the two full backups, the user uses archive merging with the -ad option to create the decremental backup. Last, once the resulting (decremental) archive has been tested and the user is sure this decremental backup is viable, he can remove the older full backup and store the new decremental backup beside the older ones and the new full backup. Only this last step saves disk space, while still letting you easily recover your system using the latest (full) backup.

    Can one use an extracted catalogue instead of the old full backup to perform a decremental backup? No. The full backup to transform must contain the whole data to be able to create a decremental backup with data in it. Only the new full backup can be replaced by its extracted catalogue.

    This last part about decremental backup is extracted from a discussion with Dan Masson on dar-support mailing-list:

    Decremental backup practice

    We start by a full backup:

    dar -c /mnt/backup/FULL-2015-04-10 -R / -z -g /mnt/backup -D

    Then, at each new cycle, we make a new full backup:

    dar -c /mnt/backup/FULL-2015-04-11 -R / -z -g /mnt/backup -D

    Then, to save space, we reduce the previous full backup into a decremental backup:

    dar -+ /mnt/backup/DECR-2015-04-10 -A /mnt/backup/FULL-2015-04-10 -@ /mnt/backup/FULL-2015-04-11 -ad -ak

    As a precaution, test that the decremental archive is viable:

    dar -t /mnt/backup/DECR-2015-04-10

    Then make space by removing the old full backup:

    rm /mnt/backup/FULL-2015-04-10.*.dar

    And you can loop this way forever, removing at any time the very oldest decremental backups if space is missing.

    Assuming you run this cycle each day, you get the following at each new step/day:

     On 2015-04-10 you have:
       FULL-2015-04-10

     On 2015-04-11 you have:
       FULL-2015-04-11
       DECR-2015-04-10

     On 2015-04-12 you have:
       FULL-2015-04-12
       DECR-2015-04-11
       DECR-2015-04-10

     On 2015-04-13 you have:
       FULL-2015-04-13
       DECR-2015-04-12
       DECR-2015-04-11
       DECR-2015-04-10

    and so on.
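    The daily cycle above can be scripted. The sketch below only echoes the dar commands it would run (set DRY_RUN=0 to execute them for real); paths and dates follow the example above:

```shell
#!/bin/sh
# One decremental-backup cycle: new full backup, reduce yesterday's full
# backup to a decremental one, test it, then drop the old full backup.
BKP=/mnt/backup
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

cycle() {                    # $1 = previous cycle's date, $2 = today's date
    run dar -c "$BKP/FULL-$2" -R / -z -g "$BKP" -D
    run dar -+ "$BKP/DECR-$1" -A "$BKP/FULL-$1" -@ "$BKP/FULL-$2" -ad -ak
    run dar -t "$BKP/DECR-$1"
    run rm "$BKP/FULL-$1".*.dar
}

cycle 2015-04-10 2015-04-11
```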

    Restoration using decremental backup

    Scenario 1: today, 2015-04-17, you have lost your system and want to restore it as it was at the time of the last backup. Solution: use the last backup; it is a full one, it is the latest backup, nothing more is needed!

    dar -x /mnt/backup/FULL-2015-04-16 -R /

    Scenario 2: today, 2015-04-17, you have lost your system due to a virus, or your system has been compromised and you know it started on 2015-04-12, so you want to restore your system to its state of 2015-04-11. First restore the last full archive (FULL-2015-04-16), then, in reverse order, all the decremental ones: DECR-2015-04-15, then DECR-2015-04-14, DECR-2015-04-13, DECR-2015-04-12 and DECR-2015-04-11. The decremental backups are small, so their restoration is usually quick (depending on how many files changed each day). Here we end in exactly the same situation as if we had restored only FULL-2015-04-11, but we did not have to store all the full backups, just the latest one.

     dar -x /mnt/backup/FULL-2015-04-16 -R /
     dar -x /mnt/backup/DECR-2015-04-15 -R / -w
     dar -x /mnt/backup/DECR-2015-04-14 -R / -w
     dar -x /mnt/backup/DECR-2015-04-13 -R / -w
     dar -x /mnt/backup/DECR-2015-04-12 -R / -w
     dar -x /mnt/backup/DECR-2015-04-11 -R / -w
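    The same restoration sequence can be generated with a small loop (a sketch: dates are passed newest-first, and the commands are only printed rather than executed):

```shell
#!/bin/sh
# Print the restoration plan: the full backup first, then each
# decremental backup in reverse chronological order (-w overwrites
# the already-restored, more recent versions without asking).
restore_plan() {             # $1 = full backup date, rest = decr dates, newest first
    full="$1"; shift
    echo "dar -x /mnt/backup/FULL-$full -R /"
    for d in "$@"; do
        echo "dar -x /mnt/backup/DECR-$d -R / -w"
    done
}

restore_plan 2015-04-16 2015-04-15 2015-04-14 2015-04-13 2015-04-12 2015-04-11
```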

    Door inodes (Solaris)

    A door inode is a dynamic object created on top of an empty file; it only exists while a process holds a reference to it, so it is not possible to restore it. But the empty file it is mounted on can be restored instead. As such, dar restores a door inode as an empty file having the same parameters as the door inode.

    If a door inode is hard linked several times in the filesystem, dar will restore a plain file having as many hard links at the corresponding locations.

    Dar is also able to handle Extended Attributes associated with a door file, if any. Last, if you list an archive containing door inodes, you will see the 'D' letter as their type (as opposed to 'd' for directories); this is consistent with what the 'ls' command displays for such entries.

    How to use binary delta with dar

    Terminology

    delta compression, binary diff and rsync increment all refer to the same feature: a way to avoid resaving a whole file during a differential/incremental backup, saving only its modified parts instead. This is of course interesting for large files that change often but only in small parts (Microsoft Exchange mailboxes, for example). Dar implements this feature relying on the librsync library; we will call it binary delta in the following.

    Librsync specific concepts

    Before looking at the way to use dar, several concepts from librsync have to be understood:

    In order to make a binary delta of a file foo, which at time t1 contained data F1 and at time t2 contained data F2, librsync first requires that a delta signature be made against F1.

    Then, using that delta signature and data F2, librsync is able to build a delta patch P1 that, applied to F1, will produce content F2:

     backing up file "foo"
       |
       V
     time t1 content = F1 ---------> delta signature of F1
       |                                       |
       |                                       |
       |                                       +---> ) building delta patch "P1"
       V                                            )----> containing the difference
     time t2 content = F2 ------------------------> ) from F1 to F2
       |
       ...

    At restoration time dar first has to restore F1, from a full backup or from a previous differential backup, then, using librsync, apply the patch "P1" to transform F1 into F2.

     restoring file "foo"
       |
       V
     time t3 content = F1 <--- from a previous backup
       |
       +------>--------------->----------------+
       .                                       |
       .                                       V
       .                      + <----- applying patch "P1"
       .                                       |
       +-----<---------------<-------------<---+
       |
       V
     time t4 content = F2

    Using binary delta with dar

    First, delta signature is not activated by default: you have to tell dar you want to generate delta signatures using the --delta sig option at archive creation/isolation/merging time. Then, as soon as a file has a delta signature in the archive of reference, dar will perform a binary delta and store a delta patch if that file has changed since the archive of reference was made. But an example is better than a long explanation:

    Making differential backup

    First, we do a full backup, adding the --delta sig option so that the resulting archive contains the signatures that will later be provided to librsync in order to set up delta patches. This has the drawback of additional space requirements, but the advantage of space savings at incremental/differential backup time:

    dar -c full -R / -z --delta sig ...other options...

    Then there is nothing more specific to delta signatures; you proceed the same way as with previous releases of dar: you just need to rely on an archive of reference containing delta signatures for dar to activate binary delta. Below, the diff1 archive will eventually contain delta patches of files modified since the full archive was created, but will not contain any delta signature.

    dar -c diff1 -A full -R / -z ...other options...

    The next differential backups will be done the same, based on the full backup:

    dar -c diff2 -A full -R / -z ...other options...

    Looking at archive content, you will see the "[Delta]" flag in place of the "[Saved]" flag for files that have been saved as a delta patch:

    [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | Size    | Date                     | filename
    -------------------------------+------------+------+-------+---------+--------------------------+---------
    [Delta][ ]      [-L-][ 99%][X]   -rwxr-xr-x   1000   1000    919 kio   Tue Mar 22 20:22:34 2016   bash

    Making incremental backup

    When doing incremental backups, the first one is always a full backup and is done exactly as above for differential backups:

    dar -c full -R / -z --delta sig ...other options...

    But as opposed to differential backups, incremental backups are also used as reference for the next backup. Thus, if you want to keep performing binary deltas, delta signatures must be present beside the delta patches in the resulting archives:

    dar -c incr1 -A full -R / -z --delta sig ...other options...

    Here the --delta sig switch leads dar to copy into the new backup all the delta signatures of unchanged files found in the full backup, and to recompute new delta signatures for files that have changed, in addition to the delta patch calculation that is done with or without this option.
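Since each incremental archive becomes the reference for the next one, the chain is easy to script. A dry-run sketch printing the commands for a short chain follows; the function name `incr_chain` and the archive names are illustrative.

```shell
#!/bin/sh
# Each incremental backup takes the previous archive as reference (-A)
# and carries --delta sig so the next one can use binary delta too.
# Dry run: commands are echoed, not executed.
incr_chain() {
    prev="full"
    for n in 1 2 3; do
        echo "dar -c incr$n -A $prev -R / -z --delta sig"
        prev="incr$n"
    done
}

incr_chain
```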

    Making isolated catalogue

    Binary delta still allows differential or incremental backups using an isolated catalogue in place of the original backup of reference. The point to take care of, if you want to perform binary delta, is the way this isolated catalogue is built: the delta signatures present in the backup of reference must be copied to the isolated catalogue, else the differential or incremental backup will be a normal one (i.e. without binary delta):

    dar -C CAT_full -A full -z --delta sig ...other options...

    Note that if the archive of reference does not hold any delta signature, the previous command will lead dar to compute on-the-fly delta signatures of saved files while performing catalogue isolation. You can thus choose not to include delta signatures inside the full backup while still letting dar use binary delta. However, as dar cannot compute a delta signature without data, files recorded as unchanged since the archive of reference was made cannot have their delta signature computed at isolation time. The same goes for a file stored as a delta patch without an associated delta signature: dar will not be able to add a delta signature for it at isolation time.

    Yes, this is as simple as adding --delta sig to what you were used to doing before. The resulting isolated catalogue will be much larger than without delta signatures, but still much smaller than the full backup itself. The incremental or differential backup can then be done as before, using CAT_full in place of full:

    dar -c diff1 -A CAT_full -R / -z ...other options...

    or

    dar -c incr1 -A CAT_full -R / -z --delta sig ...other options...

    Merging archives

    You may need to merge two backups, make a subset of a single backup, or even mix these two operations, a possibility brought by the --merge option for a long time now. Here too, if you want to keep the delta signatures that may be present in the source archives, you will have to use the --delta sig option:

    dar --merge merged_backup -A archive1 -@archive2 -z --delta sig ...other options...

    Restoring with binary delta

    No special option has to be provided at restoration time. Dar will figure out by itself whether the data stored in the backup for a file is plain data, in which case it restores the whole file, or a delta patch that has to be applied to the existing file lying on the filesystem. Before patching the file, dar calculates and checks its CRC: if the CRC is the expected one the file is patched, else a warning is issued and the file is not modified at all.

    The point with restoration is to *always* restore all previous backups in order, from the full backup down to all incremental ones (or the full backup and just the latest differential one), for dar to be able to apply the stored patches. Else restoration may fail for some or all files. Dar_manager can be of great help here, as it knows which archives to skip and which not to skip in order to restore a particular set of files. But for the restoration of a whole filesystem it is advised to just use dar and restore the backups in order.
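A dar_manager session for such a chain could look like the following dry-run sketch. Commands are echoed only; the database name `backup.dmd`, the archive names and the restored path are illustrative, so check dar_manager(1) for the exact options on your version.

```shell
#!/bin/sh
# Dry-run sketch of dar_manager usage: the database records which archive
# holds the latest version of each file, so single-file restoration does
# not require replaying the whole chain by hand.
dar_manager_demo() {
    echo "dar_manager -C backup.dmd"                        # create an empty database
    for a in full incr1 incr2; do
        echo "dar_manager -B backup.dmd -A /mnt/backup/$a"  # register each archive
    done
    echo "dar_manager -B backup.dmd -r home/joe/somefile"   # restore one file
}

dar_manager_demo
```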

    Performing binary delta only for some files

    You can exclude some files from the delta difference operation by avoiding creating a delta signature for them in the archive of reference, using the option --exclude-delta-sig. You can also include only some files for delta signatures using the --include-delta-sig option. Of course, as with other mask-related options like -I, -X, -U, -u, -Z, -Y, ... it is possible to combine them to have an even finer and more accurate definition of the files for which you want delta signatures to be built:

     dar -c full -R / -z --delta sig \
         --include-delta-sig "*.opt" \
         --include-delta-sig "*.pst" \
         --exclude-delta-sig "home/joe/*"

    Independently of this filtering mechanism based on path+filename, delta signatures are never calculated for files smaller than 10 kio, because it is not worth performing a delta difference for them. You can change that behavior using the option --delta-sig-min-size <size in bytes>:

    dar -c full -R / -z --delta sig --delta-sig-min-size 20k

    Archive listing

    Archive listing received an ad hoc addition to show which files have a delta signature and which have been saved as a delta patch. The [Data ] column shows [Delta] in place of [Saved] when a delta patch is used, and a new column entitled [D] shows [D] when a delta signature is present for the file, [ ] when not (or [-] if a delta signature is not applicable to that type of file).

    See man page about --delta related options for even more details.

    Differences between rsync and dar

    rsync uses binary deltas to reduce the volume of data sent over the network to synchronize a directory between two different hosts. The resulting data is stored uncompressed and is thus ready for use.

    dar uses binary deltas to reduce the volume of data to store (and thus transfer over the network) when performing a differential or incremental backup. As opposed to rsync, the data stays compressed and is thus not ready for direct use (backup/archiving context), and binary deltas can be used incrementally to record a long history of modifications, while rsync loses past modifications at each new remote synchronization.

    In conclusion, rsync and dar do not address the same purposes. For more on that topic, check the benchmark.

    Multi recipient signed archive weakness

    As described in the usage notes it is possible to encrypt an archive and have it readable by several recipients using their respective gnupg private key. So far, so good!

    It is also possible to embed your gnupg signature within such an archive, for your recipients to have a proof that the archive comes from you. If there is only a single recipient, so far, still so good!

    But when an archive is encrypted using gpg to several recipients and is also signed, there is a known weakness: if one of the recipients is an expert, he or she could reuse your signature for a slightly different archive.

    While this type of attack is only accessible to an expert and comes with some constraints, it can only take place within a set of friends, or at least people who know each other well enough to have exchanged their public keys.

    In that context, if you think the risk is more than theoretical and the consequences of such an exploit would be important, it is advised to sign the dar archive outside of dar; you can still keep multi-recipient encryption within dar:

    dar -c my_secret_group_stuff -z -K gnupg:recipients1@group.group,recipient2@group.group \
        -R /home/secret --hash sha512

    # check the archive has not been corrupted
    sha512sum -c my_secret_group_stuff.1.dar.sha512

    # sign the hash file (it will be faster than signing the backup,
    # in particular if this one is huge)
    gpg --sign -b my_secret_group_stuff.1.dar.sha512

    # send all three files to your recipients:
    #   my_secret_group_stuff.1.dar
    #   my_secret_group_stuff.1.dar.sha512
    #   my_secret_group_stuff.1.dar.sha512.sig
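On the recipient side the check is symmetric: verify the detached gpg signature of the hash file, then verify the hash against the archive slice. The sketch below exercises the sha512sum part on a throwaway file standing in for the archive slice; the gpg step is left as a comment since it needs the sender's public key.

```shell
#!/bin/sh
# Simulate the recipient-side integrity check with a stand-in file.
workdir=$(mktemp -d)
cd "$workdir" || exit 1

printf 'pretend archive data' > my_secret_group_stuff.1.dar
sha512sum my_secret_group_stuff.1.dar > my_secret_group_stuff.1.dar.sha512

# gpg --verify my_secret_group_stuff.1.dar.sha512.sig \
#              my_secret_group_stuff.1.dar.sha512    # needs the sender's public key

# then check the slice against the (now trusted) hash file
sha512sum -c my_secret_group_stuff.1.dar.sha512
```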
    DAR's Documentation

    Benchmarking backup tools

    Test logs

    Introduction

    This document keeps track of the tests performed that led to the summarized results presented in the benchmark document. Several helper scripts, detailed at the end of this document, were necessary to obtain these results.

    Testing Context

    Software under test

    • dar version 2.7.0
    • rsync version 3.1.3
    • tar: GNU tar 1.30 under Linux and GNU tar (gtar) 1.32 under FreeBSD

    Hardware used for testing

    Performance tests have been performed on an HPE ProLiant server (a ProLiant XL230a Gen9 running two Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz processors) running a Devuan beowulf Linux system. Other tests have been run from virtual machines (FreeBSD 12.1, Devuan 3.0.0) of a Proxmox hypervisor running on an Intel Core i5-7400 (3 GHz) based computer.

    Test logs

    Completeness of backup and restoration

    Dar

    root@terre:/mnt/localdisk/Benchmark_tools# ./build_test_tree.bash SRC
    1024+0 records in
    1024+0 records out
    1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395381 s, 265 MB/s
    1024+0 records in
    1024+0 records out
    1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621889 s, 169 MB/s
    1+0 records in
    1+0 records out
    1 byte copied, 0.000386102 s, 2.6 kB/s
    root@terre:/mnt/localdisk/Benchmark_tools# dar -c backup -R SRC
    --------------------------------------------
    14 inode(s) saved
        including 3 hard link(s) treated
    0 inode(s) changed at the moment of the backup and could not be saved properly
    0 byte(s) have been wasted in the archive to resave changing files
    0 inode(s) with only metadata changed
    0 inode(s) not saved (no inode/file change)
    0 inode(s) failed to be saved (filesystem error)
    0 inode(s) ignored (excluded by filters)
    0 inode(s) recorded as deleted from reference backup
    --------------------------------------------
    Total number of inode(s) considered: 14
    --------------------------------------------
    EA saved for 1 inode(s)
    FSA saved for 5 inode(s)
    --------------------------------------------
    root@terre:/mnt/localdisk/Benchmark_tools# mkdir DST
    root@terre:/mnt/localdisk/Benchmark_tools# dar -x backup -R DST
    --------------------------------------------
    14 inode(s) restored
        including 3 hard link(s)
    0 inode(s) not restored (not saved in archive)
    0 inode(s) not restored (overwriting policy decision)
    0 inode(s) ignored (excluded by filters)
    0 inode(s) failed to restore (filesystem error)
    0 inode(s) deleted
    --------------------------------------------
    Total number of inode(s) considered: 14
    --------------------------------------------
    EA restored for 1 inode(s)
    FSA restored for 1 inode(s)
    --------------------------------------------
    root@terre:/mnt/localdisk/Benchmark_tools#

    We simply performed backup of SRC directory with dar's default options, then restore this backup into the DST directory, let's now compare SRC and DST contents:

    root@terre:/mnt/localdisk# du -s SRC DST
    2068    SRC
    1048    DST
    root@terre:/mnt/localdisk#

    The space used by DST is less than the space used by SRC! At first we could believe that not all data was restored; let's look for the explanation:

    root@terre:/mnt/localdisk# ls -iRl SRC DST
    DST:
    total 1044
    414844 drwxr-xr-x  2 root   root       4096 Oct 28 11:09 SUB
    414850 drwxr-xr-x  2 root   root       4096 Oct 22 11:09 dev
    414848 brw-r--r--  1 root   root       2, 1 Oct 28 11:09 fd1
    414842 crw-r--r--  1 root   root       3, 1 Oct 28 11:09 null
    414841 prw-r--r--  2 root   root          0 Oct 28 11:09 pipe
    414840 -rw-rwxr--+ 1 root   root    1048576 Oct 28 11:09 plain_zeroed
    414849 -rw-r--r--  1 nobody root    1048582 Oct 28 11:09 random
    414843 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 sparse_file

    DST/SUB:
    total 4
    414841 prw-r--r--  2 root   root          0 Oct 28 11:09 hard_linked_pipe
    414846 srw-rw-rw-  2 root   root          0 Oct 12 23:00 hard_linked_socket
    414843 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 hard_linked_sparse_file
    414845 lrwxrwxrwx  1 root   root          6 Oct 28 11:09 symlink-broken -> random
    414847 lrwxrwxrwx  1 bin    daemon        9 Oct 28 11:09 symlink-valid -> ../random

    DST/dev:
    total 0
    414846 srw-rw-rw-  2 root   root          0 Oct 12 23:00 log

    SRC:
    total 2064
    411386 drwxr-xr-x  2 root   root       4096 Oct 28 11:09 SUB
    414836 drwxr-xr-x  2 root   root       4096 Oct 22 11:09 dev
    414835 brw-r--r--  1 root   root       2, 1 Oct 28 11:09 fd1
    414834 crw-r--r--  1 root   root       3, 1 Oct 28 11:09 null
    414832 prw-r--r--  2 root   root          0 Oct 28 11:09 pipe
    414826 -rw-rwxr--+ 1 root   root    1048576 Oct 28 11:09 plain_zeroed
    414827 -rw-r--r--  1 nobody root    1048582 Oct 28 11:09 random
    414828 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 sparse_file

    SRC/SUB:
    total 4
    414832 prw-r--r--  2 root   root          0 Oct 28 11:09 hard_linked_pipe
    414837 srw-rw-rw-  2 root   root          0 Oct 12 23:00 hard_linked_socket
    414828 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 hard_linked_sparse_file
    414830 lrwxrwxrwx  1 root   root          6 Oct 28 11:09 symlink-broken -> random
    414831 lrwxrwxrwx  1 bin    daemon        9 Oct 28 11:09 symlink-valid -> ../random

    SRC/dev:
    total 0
    414837 srw-rw-rw-  2 root   root          0 Oct 12 23:00 log
    root@terre:/mnt/localdisk#

    All files are present in DST and use the expected amount of space, as reported by the ls command. We can also see that hard linked inodes were properly restored for the plain file, named pipe and unix socket: the inode number in the first column is the same for each pair of hard linked entries.

    Maybe something is missing elsewhere?

    root@terre:/mnt/localdisk# getfacl SRC/plain_zeroed DST/plain_zeroed
    # file: SRC/plain_zeroed
    # owner: root
    # group: root
    user::rw-
    user:nobody:rwx
    group::r--
    mask::rwx
    other::r--

    # file: DST/plain_zeroed
    # owner: root
    # group: root
    user::rw-
    user:nobody:rwx
    group::r--
    mask::rwx
    other::r--

    root@terre:/mnt/localdisk# getfattr -d SRC/plain_zeroed DST/plain_zeroed
    # file: SRC/plain_zeroed
    user.hello="hello world!!!"

    # file: DST/plain_zeroed
    user.hello="hello world!!!"

    root@terre:/mnt/localdisk/Benchmark_tools# lsattr SRC/plain_zeroed DST/plain_zeroed
    s---i-d-------e---- SRC/plain_zeroed
    s---i-d-------e---- DST/plain_zeroed
    root@terre:/mnt/localdisk/Benchmark_tools#

    To summarize:

    • user and group ownership are restored
    • permissions are set correctly
    • ACLs are properly restored
    • Extended Attributes are too
    • filesystem-specific attributes as well
    • hard links are restored

    So what? Let's rerun du file by file:

    root@terre:/mnt/localdisk/Benchmark_tools# du -B1 SRC/* DST/*
    8192     SRC/SUB
    4096     SRC/dev
    0        SRC/fd1
    0        SRC/null
    1048576  SRC/plain_zeroed
    1052672  SRC/random
    8192     DST/SUB
    4096     DST/dev
    0        DST/fd1
    0        DST/null
    4096     DST/plain_zeroed
    1052672  DST/random
    root@terre:/mnt/localdisk/Benchmark_tools# ls -l SRC/plain_zeroed DST/plain_zeroed
    -rw-rwxr--+ 1 root root 1048576 Oct 21 18:40 DST/plain_zeroed
    -rw-rwxr--+ 1 root root 1048576 Oct 21 18:40 SRC/plain_zeroed
    root@terre:/mnt/localdisk/Benchmark_tools#

    OK, here is the explanation: the plain_zeroed file was using 1048576 bytes of disk space in SRC and consumes only 4096 bytes in DST, but its file size is still officially 1048576 bytes: it has thus become a sparse file (the all-zero blocks are not stored).

    root@terre:/mnt/localdisk/Benchmark_tools# diff -s SRC/plain_zeroed DST/plain_zeroed
    Files SRC/plain_zeroed and DST/plain_zeroed are identical
    root@terre:/mnt/localdisk/Benchmark_tools#

    But nothing changes from the user's point of view; the restoration process with dar just optimized the space usage.
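The sparse-file effect is easy to reproduce without dar: two files with the same 1 MiB logical content, one written densely and one created as a hole. The disk usage differs while the contents compare identical; the exact du figures depend on the filesystem.

```shell
#!/bin/sh
tmp=$(mktemp -d)

# dense: 1 MiB of zeros actually written to disk
dd if=/dev/zero of="$tmp/dense" bs=4096 count=256 2>/dev/null
# sparse: same logical size, but only a hole
truncate -s 1048576 "$tmp/sparse"

ls -l "$tmp/dense" "$tmp/sparse"    # same apparent size: 1048576
du -B1 "$tmp/dense" "$tmp/sparse"   # disk usage differs (filesystem dependent)
cmp -s "$tmp/dense" "$tmp/sparse" && echo "contents identical"
```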

    Let's continue checking the inode dates. As you know, Unix inode have several dates:

    • atime, the access time, gives the last time the file's data was accessed (read)
    • mtime, the modification time, gives the last time the file's data was modified (written)
    • ctime, the change time, gives the last time the file's metadata, in other words the inode properties (ownership, ACL, permissions, dates, ...), was modified
    • btime, the birth time or creation time, gives the time the file was created on the current filesystem; this date is not present on all Unix systems

    The ls -iRl command we used so far only shows the mtime date, moreover with a time accuracy of one minute, while modern systems provide nanosecond precision. For that reason we will now use the stat command instead, to see all dates at the system's time accuracy:

    root@terre:/mnt/localdisk/Benchmark_tools# stat SRC/random DST/random
      File: SRC/random
      Size: 1048576      Blocks: 2048       IO Block: 4096   regular file
    Device: 802h/2050d   Inode: 414840      Links: 1
    Access: (0644/-rw-r--r--)  Uid: (65534/  nobody)   Gid: (    0/    root)
    Access: 2020-10-22 12:13:01.813319506 +0200
    Modify: 2020-10-22 12:12:57.765328555 +0200
    Change: 2020-10-22 12:12:59.805323991 +0200
     Birth: -
      File: DST/random
      Size: 1048576      Blocks: 2048       IO Block: 4096   regular file
    Device: 802h/2050d   Inode: 414889      Links: 1
    Access: (0644/-rw-r--r--)  Uid: (65534/  nobody)   Gid: (    0/    root)
    Access: 2020-10-22 12:13:01.813319506 +0200
    Modify: 2020-10-22 12:12:57.765328555 +0200
    Change: 2020-10-22 12:14:34.877131738 +0200
     Birth: -
    root@terre:/mnt/localdisk/Benchmark_tools#

    From the above output we see that:

    • atime is restored
    • mtime is restored
    • ctime is not restored
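The asymmetry is inherent to POSIX: atime and mtime can be set back by an application, which is what a restoration tool does, while ctime is updated by the kernel at every metadata change, including the restoration itself. A quick illustration with touch and GNU stat (GNU coreutils assumed):

```shell
#!/bin/sh
f=$(mktemp)

# set atime and mtime explicitly, as a restoration tool would
touch -a -d '2020-10-22 12:13:01' "$f"
touch -m -d '2020-10-22 12:12:57' "$f"

stat -c 'atime: %x' "$f"   # the value we just set
stat -c 'mtime: %y' "$f"   # the value we just set
stat -c 'ctime: %z' "$f"   # "now": the kernel updated it on each touch
```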

    As we targeted this benchmark mainly at Linux, which does not yet make btime fully available (some Linux filesystems support btime, but access to it is not yet fully exposed to applications), we will momentarily switch to a BSD system to play with btime. BSD systems include MacOS X, FreeBSD, NetBSD, DragonFly BSD, ...; we will use FreeBSD here. Under FreeBSD, the stat command is not as easy to read as under Linux; however it is very flexible, which we will leverage to mimic the Linux output:

    root@FreeBSD:~denis # which mystat
    mystat: aliased to stat -f "%N%nAccess: %Sa%nModify: %Sm%nChange: %Sc%nBirth: %SB%n" !*
    root@FreeBSD:~denis # mystat SRC/random
    SRC/random
    Access: Oct 27 13:28:41 2020
    Modify: Oct 22 15:34:07 2020
    Change: Oct 22 15:34:09 2020
    Birth:  Oct 22 15:34:07 2020
    root@FreeBSD:~denis # dar -c backup -R SRC -q
    root@FreeBSD:~denis # mkdir DST
    root@FreeBSD:~denis # dar -x backup -R DST -q
    root@FreeBSD:~denis # mystat DST/random
    DST/random
    Access: Oct 27 13:28:41 2020
    Modify: Oct 22 15:34:07 2020
    Change: Oct 27 13:31:50 2020
    Birth:  Oct 22 15:34:07 2020
    root@FreeBSD:~denis #

    In conclusion dar also saves and restores btime properly.

    Rsync

    Let's do the same as we did previously, using rsync. We start by copying the SRC directory to DST:

    root@terre:/mnt/localdisk/Benchmark_tools# chattr -i DST/plain_zeroed
    root@terre:/mnt/localdisk/Benchmark_tools# rm -rf DST
    root@terre:/mnt/localdisk/Benchmark_tools# mkdir DST
    root@terre:/mnt/localdisk/Benchmark_tools# rsync -arvHAXS SRC/* DST
    sending incremental file list
    created directory DST
    fd1
    null
    pipe
    plain_zeroed
    random
    SUB/
    SUB/hard_linked_pipe => pipe
    SUB/hard_linked_socket
    SUB/hard_linked_sparse_file
    SUB/symlink-broken -> random
    SUB/symlink-valid -> ../random
    dev/
    dev/log => SUB/hard_linked_socket
    sparse_file => SUB/hard_linked_sparse_file

    sent 12,340,852 bytes  received 198 bytes  24,682,100.00 bytes/sec
    total size is 22,577,173  speedup is 1.83
    root@terre:/mnt/localdisk/Benchmark_tools#

    First note: the backup and restoration are done in one step, where dar decorrelates the backup operation from the restoration operation. The resulting backup needs no software to be restored (DST is a copy of SRC). For dar to reach the same result (without using storage for the backup), two dar commands are needed: dar -c - -R SRC | dar -x - --sequential-read -R DST. The situation is similar with tar: you need two commands to perform the same task: tar -cf - | tar -xf -
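The two-command pipe form is easy to try with tar on a scratch directory. A minimal runnable sketch follows; the dar variant follows the same pattern, with --sequential-read on the reading side.

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir -p "$tmp/SRC/sub" "$tmp/DST"
echo "hello" > "$tmp/SRC/sub/file"

# backup and restoration in one step: the first tar writes the archive
# to stdout, the second reads it from stdin and extracts into DST
tar -cf - -C "$tmp/SRC" . | tar -xf - -C "$tmp/DST"

cat "$tmp/DST/sub/file"
```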

    root@terre:/mnt/localdisk/Benchmark_tools# du -s SRC DST
    2056    SRC
    1028    DST
    root@terre:/mnt/localdisk/Benchmark_tools#

    Here too, the restored data uses less space than the original: sparse files have been taken into account (this required the -S option) and space optimization of the non-sparse file was performed.

    root@terre:/mnt/localdisk/Benchmark_tools# ls -iRl SRC DST
    DST:
    total 12060
    414843 drwxr-xr-x  2 root   root       4096 Oct 28 11:09 SUB
    414844 drwxr-xr-x  2 root   root       4096 Oct 22 11:09 dev
    414840 brw-r--r--  1 root   root       2, 1 Oct 28 11:09 fd1
    414841 crw-r--r--  1 root   root       3, 1 Oct 28 11:09 null
    414842 prw-r--r--  2 root   root          0 Oct 28 11:09 pipe
    414848 -rw-rwxr--+ 1 root   root    1048576 Oct 28 11:09 plain_zeroed
    414849 -rw-r--r--  1 nobody root    1048582 Oct 28 11:09 random
    414850 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 sparse_file

    DST/SUB:
    total 10000
    414842 prw-r--r--  2 root   root          0 Oct 28 11:09 hard_linked_pipe
    414845 srw-rw-rw-  2 root   root          0 Oct 12 23:00 hard_linked_socket
    414850 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 hard_linked_sparse_file
    414846 lrwxrwxrwx  1 root   root          6 Oct 28 11:09 symlink-broken -> random
    414847 lrwxrwxrwx  1 bin    daemon        9 Oct 28 11:09 symlink-valid -> ../random

    DST/dev:
    total 0
    414845 srw-rw-rw-  2 root   root          0 Oct 12 23:00 log

    SRC:
    total 2064
    411386 drwxr-xr-x  2 root   root       4096 Oct 28 11:09 SUB
    414836 drwxr-xr-x  2 root   root       4096 Oct 22 11:09 dev
    414835 brw-r--r--  1 root   root       2, 1 Oct 28 11:09 fd1
    414834 crw-r--r--  1 root   root       3, 1 Oct 28 11:09 null
    414832 prw-r--r--  2 root   root          0 Oct 28 11:09 pipe
    414826 -rw-rwxr--+ 1 root   root    1048576 Oct 28 11:09 plain_zeroed
    414827 -rw-r--r--  1 nobody root    1048582 Oct 28 11:09 random
    414828 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 sparse_file

    SRC/SUB:
    total 4
    414832 prw-r--r--  2 root   root          0 Oct 28 11:09 hard_linked_pipe
    414837 srw-rw-rw-  2 root   root          0 Oct 12 23:00 hard_linked_socket
    414828 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 hard_linked_sparse_file
    414830 lrwxrwxrwx  1 root   root          6 Oct 28 11:09 symlink-broken -> random
    414831 lrwxrwxrwx  1 bin    daemon        9 Oct 28 11:09 symlink-valid -> ../random

    SRC/dev:
    total 0
    414837 srw-rw-rw-  2 root   root          0 Oct 12 23:00 log
    root@terre:/mnt/localdisk/Benchmark_tools#

    All files are present in DST and use the expected amount of space, as reported by ls. We can also see that all three hard linked inodes (plain file, socket and named pipe) were restored properly. So we can suspect the cause of the size difference to be linked to sparse files.

    Let's now check file's metadata:

    root@terre:/mnt/localdisk/Benchmark_tools# getfacl SRC/plain_zeroed DST/plain_zeroed
    # file: SRC/plain_zeroed
    # owner: root
    # group: root
    user::rw-
    user:nobody:rwx
    group::r--
    mask::rwx
    other::r--

    # file: DST/plain_zeroed
    # owner: root
    # group: root
    user::rw-
    user:nobody:rwx
    group::r--
    mask::rwx
    other::r--

    root@terre:/mnt/localdisk/Benchmark_tools# getfattr -d SRC/plain_zeroed DST/plain_zeroed
    # file: SRC/plain_zeroed
    user.hello="hello world!!!"

    # file: DST/plain_zeroed
    user.hello="hello world!!!"

    root@terre:/mnt/localdisk/Benchmark_tools# lsattr SRC/plain_zeroed DST/plain_zeroed
    s---i-d-------e---- SRC/plain_zeroed
    --------------e---- DST/plain_zeroed
    root@terre:/mnt/localdisk/Benchmark_tools# stat SRC/random DST/random
      File: SRC/random
      Size: 1048582      Blocks: 2056       IO Block: 4096   regular file
    Device: 802h/2050d   Inode: 414827      Links: 1
    Access: (0644/-rw-r--r--)  Uid: (65534/  nobody)   Gid: (    0/    root)
    Access: 2020-10-28 11:09:59.977926733 +0100
    Modify: 2020-10-28 11:09:57.973931318 +0100
    Change: 2020-10-28 11:09:57.973931318 +0100
     Birth: -
      File: DST/random
      Size: 1048582      Blocks: 2056       IO Block: 4096   regular file
    Device: 802h/2050d   Inode: 414849      Links: 1
    Access: (0644/-rw-r--r--)  Uid: (65534/  nobody)   Gid: (    0/    root)
    Access: 2020-10-28 12:07:53.622841733 +0100
    Modify: 2020-10-28 11:09:57.973931318 +0100
    Change: 2020-10-28 12:07:53.622841733 +0100
     Birth: -
    root@terre:/mnt/localdisk/Benchmark_tools#

    So in summary:

    • Permissions are restored,
    • user and group ownership are restored too,
    • mtime is restored,
    • File ACLs are restored,
    • Extended Attributes are restored

    But

    • filesystem-specific attributes are not restored,
    • atime is not restored,
    • ctime is not restored

    For btime as we did before, let's test under a FreeBSD system:

    root@FreeBSD:~denis # rm -rf DST
    root@FreeBSD:/home/denis # which mystat
    mystat: aliased to stat -f "%N%nAccess: %Sa%nModify: %Sm%nChange: %Sc%nBirth: %SB%n" !*
    root@FreeBSD:/home/denis # mystat SRC/random
    SRC/random
    Access: Oct 27 14:27:59 2020
    Modify: Oct 22 15:34:07 2020
    Change: Oct 22 15:34:09 2020
    Birth:  Oct 22 15:34:07 2020
    root@FreeBSD:/home/denis # mkdir DST
    root@FreeBSD:/home/denis # rsync -arv SRC/* DST
    sending incremental file list
    fd1
    null
    pipe
    plain_zeroed
    random
    sparse_file
    SUB/
    SUB/hard_linked_socket
    SUB/hard_linked_sparse_file
    SUB/symlink-broken -> random
    SUB/symlink-valid -> ../random
    dev/
    dev/log -> /var/run/log

    sent 22,583,283 bytes  received 129 bytes  45,166,824.00 bytes/sec
    total size is 22,577,179  speedup is 1.00
    root@FreeBSD:/home/denis # mystat DST/random
    DST/random
    Access: Oct 27 14:28:53 2020
    Modify: Oct 22 15:34:07 2020
    Change: Oct 27 14:28:53 2020
    Birth:  Oct 22 15:34:07 2020
    root@FreeBSD:/home/denis #

    So, birthtime is properly restored.

    Tar

    As previously done, let's save and restore the SRC directory to DST... Note that by default sparse files are not taken into account (this is why we added the -S option), same for ACLs (so we added the --acl option) and Extended Attributes (unless --xattrs is added). The tar command line thus becomes a bit longer:

    root@terre:/mnt/localdisk/Benchmark_tools# rm -rf DST
    root@terre:/mnt/localdisk/Benchmark_tools# cd SRC
    root@terre:/mnt/localdisk/Benchmark_tools/SRC# tar --acl --xattrs -cSf ../backup.tar *
    tar: SUB/hard_linked_socket: socket ignored
    tar: dev/log: socket ignored
    root@terre:/mnt/localdisk/Benchmark_tools/SRC# cd ../
    root@terre:/mnt/localdisk/Benchmark_tools# mkdir DST
    root@terre:/mnt/localdisk/Benchmark_tools# cd DST
    root@terre:/mnt/localdisk/Benchmark_tools/DST# tar --acl --xattrs -xSf ../backup.tar
    root@terre:/mnt/localdisk/Benchmark_tools/DST# cd ..
    root@terre:/mnt/localdisk/Benchmark_tools#

    Now let's compare the restored data with the original:

    root@terre:/mnt/localdisk/Benchmark_tools# du -s SRC DST
    2068    SRC
    2068    DST
    root@terre:/mnt/localdisk/Benchmark_tools#

    The sparse file has been properly restored (thanks to the -S option), but no space optimization was performed.

    root@terre:/mnt/localdisk/Benchmark_tools# ls -iRl SRC DST
    DST:
    total 12060
    414841 drwxr-xr-x  2 root   root       4096 Oct 28 11:09 SUB
    414846 drwxr-xr-x  2 root   root       4096 Oct 22 11:09 dev
    414847 brw-r--r--  1 root   root       2, 1 Oct 28 11:09 fd1
    414848 crw-r--r--  1 root   root       3, 1 Oct 28 11:09 null
    414849 prw-r--r--  1 root   root          0 Oct 28 11:09 pipe
    414850 -rw-rwxr--  1 root   root    1048576 Oct 28 11:09 plain_zeroed
    414852 -rw-r--r--  1 nobody root    1048582 Oct 28 11:09 random
    414843 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 sparse_file

    DST/SUB:
    total 10000
    414845 prw-r--r--  1 root   root          0 Oct 28 11:09 hard_linked_pipe
    414843 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 hard_linked_sparse_file
    414842 lrwxrwxrwx  1 root   root          6 Oct 28 11:09 symlink-broken -> random
    414844 lrwxrwxrwx  1 bin    daemon        9 Oct 28 11:09 symlink-valid -> ../random

    DST/dev:
    total 0

    SRC:
    total 2064
    411386 drwxr-xr-x  2 root   root       4096 Oct 28 11:09 SUB
    414836 drwxr-xr-x  2 root   root       4096 Oct 22 11:09 dev
    414835 brw-r--r--  1 root   root       2, 1 Oct 28 11:09 fd1
    414834 crw-r--r--  1 root   root       3, 1 Oct 28 11:09 null
    414832 prw-r--r--  2 root   root          0 Oct 28 11:09 pipe
    414826 -rw-rwxr--+ 1 root   root    1048576 Oct 28 11:09 plain_zeroed
    414827 -rw-r--r--  1 nobody root    1048582 Oct 28 11:09 random
    414828 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 sparse_file

    SRC/SUB:
    total 4
    414832 prw-r--r--  2 root   root          0 Oct 28 11:09 hard_linked_pipe
    414837 srw-rw-rw-  2 root   root          0 Oct 12 23:00 hard_linked_socket
    414828 -rw-r--r--  2 root   root   10240000 Oct 28 11:09 hard_linked_sparse_file
    414830 lrwxrwxrwx  1 root   root          6 Oct 28 11:09 symlink-broken -> random
    414831 lrwxrwxrwx  1 bin    daemon        9 Oct 28 11:09 symlink-valid -> ../random

    SRC/dev:
    total 0
    414837 srw-rw-rw-  2 root   root          0 Oct 12 23:00 log
    root@terre:/mnt/localdisk/Benchmark_tools#

    The warning was not vain: SUB/hard_linked_socket and log are missing in DST. This is however a minor problem, as unix sockets are usually recreated by the process using them. We might nevertheless have to set some permissions and ownership back by hand. A typical use case is a syslog daemon socket made available to a chrooted process or container (MTA or other network service).

    The second problem is a bit more annoying: the hard-linked fifo (aka named pipe) is silently restored as two independent named pipes (the inode numbers in the first column differ for pipe and SUB/hard_linked_pipe, and their link count, which was 2 in SRC, is now 1 in DST). If two processes in different namespaces or chrooted environments exchange data by means of such a hard-linked pipe and you are not aware of this failure after restoration, it will be difficult to identify why the two processes are simply locked: one waiting for data that will never come from the pipe, the other stuck waiting for its pipe to be read.
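    Such at-risk entries can be spotted before the backup is even made. A minimal sketch, assuming GNU find and illustrative paths: list every non-regular, non-directory entry whose link count is greater than 1.

```shell
# Sketch (illustrative paths): build a tiny tree containing a hard-linked
# fifo, then list every special (non-regular, non-directory, non-symlink)
# entry with more than one link -- the entries tar restores incorrectly.
set -e
demo=$(mktemp -d)
mkdir "$demo/SUB"
mkfifo "$demo/pipe"
ln "$demo/pipe" "$demo/SUB/hard_linked_pipe"   # hard link to the fifo
find "$demo" ! -type f ! -type d ! -type l -links +1
```

    Both paths of the fifo are reported, which is exactly the situation to audit by hand after a tar restoration.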

    Let's continue by checking the file's metadata:

    root@terre:/mnt/localdisk/Benchmark_tools# getfacl SRC/plain_zeroed DST/plain_zeroed # file: SRC/plain_zeroed # owner: root # group: root user::rw- user:nobody:rwx group::r-- mask::rwx other::r-- # file: DST/plain_zeroed # owner: root # group: root user::rw- user:nobody:rwx group::r-- mask::rwx other::r-- root@terre:/mnt/localdisk/Benchmark_tools# getfattr -d SRC/plain_zeroed DST/plain_zeroed # file: SRC/plain_zeroed user.hello="hello world!!!" # file: DST/plain_zeroed user.hello="hello world!!!" root@terre:/mnt/localdisk/Benchmark_tools# lsattr SRC/plain_zeroed DST/plain_zeroed s---i-d-------e---- SRC/plain_zeroed --------------e---- DST/plain_zeroed root@terre:/mnt/localdisk/Benchmark_tools#

    Note that without --xattrs at creation time the timestamp accuracy of tar is 1 second: root@terre:/mnt/localdisk/Benchmark_tools# stat SRC/random DST/random File: SRC/random Size: 1048576 Blocks: 2048 IO Block: 4096 regular file Device: 802h/2050d Inode: 414841 Links: 1 Access: (0644/-rw-r--r--) Uid: (65534/ nobody) Gid: ( 0/ root) Access: 2020-10-27 14:03:46.064046436 +0100 Modify: 2020-10-27 14:03:42.016050420 +0100 Change: 2020-10-27 14:03:44.048048418 +0100 Birth: - File: DST/random Size: 1048576 Blocks: 2048 IO Block: 4096 regular file Device: 802h/2050d Inode: 414890 Links: 1 Access: (0644/-rw-r--r--) Uid: (65534/ nobody) Gid: ( 0/ root) Access: 2020-10-27 19:08:14.932424226 +0100 Modify: 2020-10-27 14:03:42.000000000 +0100 Change: 2020-10-27 19:08:14.932424226 +0100 Birth: - root@terre:/mnt/localdisk/Benchmark_tools#

    From the above output we see that:

    • permissions are restored,
    • user and group ownership are restored too,
    • mtime is restored, but --xattrs is needed to preserve the nanosecond accuracy common on today's systems,
    • ACL are restored,
    • Extended Attributes are restored

    But

    • filesystem attributes are not restored,
    • atime is not restored,
    • ctime is not restored
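    The sub-second comparison used above can be scripted with GNU stat. A minimal sketch, assuming GNU coreutils and illustrative file names:

```shell
# Sketch: compare two files' mtimes at full nanosecond resolution.
# %y always prints nine fractional digits with GNU stat; a plain tar
# restore (without --xattrs) only matches at whole-second precision.
set -e
a=$(mktemp)
b=$(mktemp)
cp -p "$a" "$b"                 # -p carries the full-precision timestamp over
stat -c '%n %y' "$a" "$b"       # show both mtimes with nanoseconds
[ "$(stat -c '%y' "$a")" = "$(stat -c '%y' "$b")" ] && echo "mtimes identical"
```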

    For the last timestamp, the birth time, we will again perform the test under FreeBSD:

    root@FreeBSD:~denis # which mystat mystat: aliased to stat -f "%N%nAccess: %Sa%nModify: %Sm%nChange: %Sc%nBirth: %SB%n" !* root@FreeBSD:~denis # mystat SRC/random SRC/random Access: Oct 27 19:40:13 2020 Modify: Oct 22 15:34:07 2020 Change: Oct 22 15:34:09 2020 Birth: Oct 22 15:34:07 2020 root@FreeBSD:~denis # cd SRC root@FreeBSD:~denis/SRC # gtar -cf ../backup.tar random root@FreeBSD:~denis/SRC # cd .. root@FreeBSD:~denis # mkdir DST root@FreeBSD:~denis # cd DST root@FreeBSD:~denis/DST # tar -xf ../backup.tar root@FreeBSD:~denis/DST # cd .. root@FreeBSD:~denis # mystat DST/random DST/random Access: Oct 28 15:43:30 2020 Modify: Oct 22 15:34:07 2020 Change: Oct 28 15:43:30 2020 Birth: Oct 22 15:34:07 2020 root@FreeBSD:~denis #

    gtar saved and restored the birth time.

    Feature set

    Historization

    To evaluate this feature, we will first create two files A.txt and B.txt and make a first backup. Then we remove A.txt, add C.txt, and make a second backup. We should be able to restore the data in both states (A+B and B+C). To simplify the operation we use the historization_feature script described at the end of this document.

    Dar
    root@terre:/mnt/memdisk# rm -rf SRC root@terre:/mnt/memdisk# ./historization_feature SRC phase1 root@terre:/mnt/memdisk# dar -c full -g SRC -q root@terre:/mnt/memdisk# ./historization_feature SRC phase2 root@terre:/mnt/memdisk# dar -c diff -A full -g SRC -q root@terre:/mnt/memdisk# mkdir DST root@terre:/mnt/memdisk# dar -x full -R DST -q root@terre:/mnt/memdisk# ls -lR DST DST: total 0 drwxr-xr-x 2 root root 80 Nov 6 18:37 SRC DST/SRC: total 8 -rw-r--r-- 1 root root 13 Nov 6 18:37 A.txt -rw-r--r-- 1 root root 24 Nov 6 18:37 B.txt root@terre:/mnt/memdisk# dar -x diff -R DST -w -q root@terre:/mnt/memdisk# ls -lR DST DST: total 0 drwxr-xr-x 2 root root 80 Nov 6 18:38 SRC DST/SRC: total 8 -rw-r--r-- 1 root root 24 Nov 6 18:37 B.txt -rw-r--r-- 1 root root 21 Nov 6 18:38 C.txt root@terre:/mnt/memdisk#

    Historization is present: we can get back both saved states from the backups.

    As a complement, dar provides a manager, dar_manager, to easily locate a file's status across the archives a database has been fed with, as well as which archive holds the file's data:

    root@terre:/mnt/memdisk# dar_manager -C base.dmd root@terre:/mnt/memdisk# dar_manager -B base.dmd -A full root@terre:/mnt/memdisk# dar_manager -B base.dmd -A diff root@terre:/mnt/memdisk# dar_manager -B base.dmd -f SRC/A.txt 1 Fri Nov 6 18:37:51 2020 saved absent 2 Fri Nov 6 18:38:04 2020 removed absent root@terre:/mnt/memdisk# dar_manager -B base.dmd -f SRC/B.txt 1 Fri Nov 6 18:37:51 2020 saved absent 2 Fri Nov 6 18:37:51 2020 present absent root@terre:/mnt/memdisk# dar_manager -B base.dmd -f SRC/C.txt 2 Fri Nov 6 18:38:04 2020 saved absent root@terre:/mnt/memdisk# dar_manager -B base.dmd -l dar path : dar options : database version: 5 compression used: gzip archive # | path | basename ------------+--------------+--------------- 1 . full 2 . diff root@terre:/mnt/memdisk# dar_manager -B base.dmd -u 1 [ Saved ][ ] SRC/B.txt [ Saved ][ ] SRC/A.txt root@terre:/mnt/memdisk# dar_manager -B base.dmd -u 2 [ Saved ][ ] SRC [ Saved ][ ] SRC/C.txt root@terre:/mnt/memdisk#

    dar_manager can even take the necessary actions for you, invoking dar as many times as needed to restore the files' status at a given date for a given subset of the saved files:

    root@terre:/mnt/memdisk# dar_manager -v -B base.dmd -e "-R DST -w" -r SRC Decompressing and loading database to memory... Looking in archives for requested files, classifying files archive by archive... Checking chronological ordering of files between the archives... File recorded as removed at this date in database: SRC/A.txt CALLING DAR: restoring 1 files from archive ./full using anonymous pipe to transmit configuration to the dar process Arguments sent through anonymous pipe are: dar -x ./full -R DST -w -g SRC/B.txt -------------------------------------------- 2 inode(s) restored including 0 hard link(s) 0 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 1 inode(s) ignored (excluded by filters) 0 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 3 -------------------------------------------- EA restored for 0 inode(s) FSA restored for 0 inode(s) -------------------------------------------- CALLING DAR: restoring 2 files from archive ./diff using anonymous pipe to transmit configuration to the dar process Arguments sent through anonymous pipe are: dar -x ./diff -R DST -w -g SRC -g SRC/C.txt Error while restoring /mnt/memdisk/DST/SRC/A.txt : Cannot remove non-existent file from filesystem: /mnt/memdisk/DST/SRC/A.txt -------------------------------------------- 2 inode(s) restored including 0 hard link(s) 1 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 0 inode(s) ignored (excluded by filters) 1 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 4 -------------------------------------------- EA restored for 0 inode(s) FSA restored for 0 inode(s) -------------------------------------------- Final memory cleanup... 
All files asked could not be restored DAR sub-process has terminated with exit code 5 Continue anyway ? [return = YES | Esc = NO] Continuing... root@terre:/mnt/memdisk# ls -lR DST DST: total 0 drwxr-xr-x 2 root root 80 Nov 6 18:38 SRC DST/SRC: total 8 -rw-r--r-- 1 root root 24 Nov 6 18:37 B.txt -rw-r--r-- 1 root root 21 Nov 6 18:38 C.txt root@terre:/mnt/memdisk#
    Rsync
    root@terre:/mnt/memdisk# ./historization_feature SRC phase1 root@terre:/mnt/memdisk# rsync -arvHAX SRC DST sending incremental file list created directory DST SRC/ SRC/A.txt SRC/B.txt sent 229 bytes received 84 bytes 626.00 bytes/sec total size is 37 speedup is 0.12 root@terre:/mnt/memdisk# ./historization_feature SRC phase2 root@terre:/mnt/memdisk# rsync -arvHAX SRC DST sending incremental file list SRC/ SRC/C.txt sent 172 bytes received 39 bytes 422.00 bytes/sec total size is 45 speedup is 0.21 root@terre:/mnt/memdisk# ls -l total 4 drwxr-xr-x 3 root root 60 Nov 6 17:06 DST drwxr-xr-x 2 root root 80 Nov 6 17:06 SRC -rwxr--r-- 1 root root 589 Nov 6 16:32 historization_feature root@terre:/mnt/memdisk# ls -l DST total 0 drwxr-xr-x 2 root root 100 Nov 6 17:06 SRC root@terre:/mnt/memdisk# ls -l DST/SRC total 12 -rw-r--r-- 1 root root 13 Nov 6 17:05 A.txt -rw-r--r-- 1 root root 24 Nov 6 17:05 B.txt -rw-r--r-- 1 root root 21 Nov 6 17:06 C.txt root@terre:/mnt/memdisk# rsync -arvHAX --delete SRC DST sending incremental file list deleting SRC/A.txt sent 101 bytes received 26 bytes 254.00 bytes/sec total size is 45 speedup is 0.35 root@terre:/mnt/memdisk# ls -l DST/SRC total 8 -rw-r--r-- 1 root root 24 Nov 6 17:05 B.txt -rw-r--r-- 1 root root 21 Nov 6 17:06 C.txt root@terre:/mnt/memdisk#

    The "backup" contains all three files A.txt, B.txt and C.txt, while the first and the last never existed at the same time. Such a backup allows restoring neither the phase1 state nor the phase2 state.

    We added the --delete option and as a result we got the phase2 state only. But then we cannot restore the phase1 state, as the file A.txt has been deleted from the backup.

    To have both states with rsync, we would have to point rsync at a different destination directory at each new backup, which would consume a lot of space and would also defeat one of the main features of rsync: its ability to synchronize two directories by exchanging only the minimal information that was modified.
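    A middle ground does exist: hard-link snapshots, which rsync can automate with its --link-dest option (e.g. rsync -a --delete --link-dest=../snap.1 SRC/ snap.2/). Each snapshot is a full directory tree, but unchanged files are hard links to the previous snapshot and cost almost no extra space. A minimal sketch of the idea using only cp, with illustrative names:

```shell
# Sketch: two "snapshots" sharing unchanged files through hard links.
# SRC, snap.1 and snap.2 are illustrative names.
set -e
work=$(mktemp -d); cd "$work"
mkdir SRC
echo "content A" > SRC/A.txt
echo "content B" > SRC/B.txt
cp -a SRC snap.1                 # phase1: first snapshot is a plain copy
rm SRC/A.txt
echo "content C" > SRC/C.txt     # phase2 changes in the source
cp -al snap.1 snap.2             # hard-link copy of the previous snapshot
rm snap.2/A.txt                  # replay the deletion...
cp -a SRC/C.txt snap.2/          # ...and copy over the new file
ls snap.1 snap.2
```

    snap.1 still holds the phase1 state (A+B), snap.2 holds the phase2 state (B+C), and the unchanged B.txt is stored only once.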

    Tar
    root@terre:/mnt/memdisk# rmdir SRC root@terre:/mnt/memdisk# ./historization_feature SRC phase1 root@terre:/mnt/memdisk# tar --listed-incremental=snapshot.file -cf full.tar SRC root@terre:/mnt/memdisk# ./historization_feature SRC phase2 root@terre:/mnt/memdisk# tar --listed-incremental=snapshot.file -cf diff.tar SRC root@terre:/mnt/memdisk# mkdir DST root@terre:/mnt/memdisk# cd DST root@terre:/mnt/memdisk/DST# tar --listed-incremental=snapshot.file -xf ../full.tar root@terre:/mnt/memdisk/DST# ls -l SRC total 8 -rw-r--r-- 1 root root 13 Nov 6 18:20 A.txt -rw-r--r-- 1 root root 24 Nov 6 18:20 B.txt root@terre:/mnt/memdisk/DST# tar --listed-incremental=snapshot.file -xf ../diff.tar root@terre:/mnt/memdisk/DST# ls -l SRC total 8 -rw-r--r-- 1 root root 24 Nov 6 18:20 B.txt -rw-r--r-- 1 root root 21 Nov 6 18:21 C.txt root@terre:/mnt/memdisk/DST#

    We could restore both the phase1 and phase2 states from the backups: historization is available with tar.

    Data filtering by directory

    Dar

    We want to save /lib except the content of /lib/modules:

    root@terre:/mnt/memdisk# dar -c backup -R /lib -P modules -vs -q Skipping file: /lib/modules root@terre:/mnt/memdisk#

    What if we want to exclude /lib/modules except /lib/modules/4.19.0-12-amd64?

    root@terre:/mnt/memdisk# rm backup.1.dar rm: remove regular file 'backup.1.dar'? y root@terre:/mnt/memdisk# dar -c backup -R /lib -am -P modules -g modules/4.19.0-12-amd64 -vs -q Skipping file: /lib/modules/4.19.0-11-amd64 Skipping file: /lib/modules/4.19.0-10-amd64 root@terre:/mnt/memdisk#

    OK, we can mix included directories and excluded directories

    Rsync
    root@terre:/mnt/memdisk# rm -rf DST root@terre:/mnt/memdisk# mkdir DST root@terre:/mnt/memdisk# rsync -arHAXS --exclude /lib/modules /lib DST root@terre:/mnt/memdisk# ls -ld DST/lib/m* drwxr-xr-x 2 root root 80 Jun 11 23:33 DST/lib/modprobe.d root@terre:/mnt/memdisk# root@terre:/mnt/memdisk# ls -l DST/lib/modules ls: cannot access 'DST/lib/modules': No such file or directory root@terre:/mnt/memdisk#

    We could exclude /lib/modules as expected. As previously, let's exclude it except /lib/modules/4.19.0-12-amd64:

    root@terre:/mnt/memdisk# rm -rf DST root@terre:/mnt/memdisk# rsync -arHAXS -f "+ /lib/modules" -f "- /lib/modules/4.19.0-12-amd64" /lib DST root@terre:/mnt/memdisk# la DST/lib/modules/ total 0 drwxr-xr-x 4 root root 80 Oct 22 10:33 . drwxr-xr-x 19 root root 420 Oct 22 11:20 .. drwxr-xr-x 3 root root 280 Aug 8 12:58 4.19.0-10-amd64 drwxr-xr-x 3 root root 280 Oct 12 11:25 4.19.0-11-amd64 root@terre:/mnt/memdisk#

    OK, we can mix included directories and excluded directories

    Tar

    Let's save /lib while excluding /lib/modules again:

    root@terre:/mnt/memdisk# tar --exclude /lib/modules -cf backup.tar /lib tar: Removing leading `/' from member names root@terre:/mnt/memdisk# tar -tf backup.tar | grep modules root@terre:/mnt/memdisk#

    Now let's exclude /lib/modules except /lib/modules/4.19.0-12-amd64:

    root@terre:/mnt/memdisk# tar -cf backup.tar /lib/modules/4.19.0-12-amd64/ --exclude /lib/modules /lib tar: Removing leading `/' from member names tar: Removing leading `/' from hard link targets root@terre:/mnt/memdisk# tar -tf backup.tar | wc -l 6017 root@terre:/mnt/memdisk# tar -tf backup.tar | grep -v "lib/modules/4.19.0-12-amd64" | wc -l 1626 root@terre:/mnt/memdisk# tar -tf backup.tar | grep "lib/modules/4.19.0-12-amd64" | wc -l 4391 root@terre:/mnt/memdisk# tar -tf backup.tar | grep "lib/modules" | wc -l 4391 root@terre:/mnt/memdisk#

    The backup contains a total of 6017 entries; 1626 are outside the lib/modules/4.19.0-12-amd64 directory and the rest are all inside it. Nothing else is found in lib/modules, even though the lib/modules/4.19.0-11-amd64 and lib/modules/4.19.0-10-amd64 subdirectories were present. We can thus mix included and excluded directories.
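    The same mixing can be reproduced on a small synthetic tree. A minimal sketch with illustrative names: GNU tar's --exclude pattern prunes the directory during recursion, while the explicitly listed subdirectory (which does not match the pattern) is archived anyway.

```shell
# Toy tree: keep lib/modules/keep, drop the rest of lib/modules,
# and save everything else under lib. All names are illustrative.
set -e
work=$(mktemp -d); cd "$work"
mkdir -p lib/modules/keep lib/modules/drop lib/other
touch lib/modules/keep/a.ko lib/modules/drop/b.ko lib/other/c.so
tar -cf backup.tar lib/modules/keep --exclude lib/modules lib
tar -tf backup.tar
```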

    Data filtering by filename

    Dar
    root@terre:/mnt/memdisk# dar -c backup -R /lib -X "*.ko" -------------------------------------------- 4122 inode(s) saved including 0 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 10677 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 14799 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 3945 inode(s) -------------------------------------------- root@terre:/mnt/memdisk# mkdir DST root@terre:/mnt/memdisk# dar -x backup -R DST --fsa-scope none -------------------------------------------- 4122 inode(s) restored including 0 hard link(s) 0 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 0 inode(s) ignored (excluded by filters) 0 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 4122 -------------------------------------------- EA restored for 0 inode(s) FSA restored for 0 inode(s) -------------------------------------------- root@terre:/mnt/memdisk# find DST -name "*.ko" -ls root@terre:/mnt/memdisk#

    We could exclude all files having the .ko extension. What if we do not want to exclude those whose name starts with ext?

    root@terre:/mnt/memdisk# dar -c backup -R /lib -am -X "*.ko" -I "ext*" -------------------------------------------- 4128 inode(s) saved including 0 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 10671 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 14799 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 3951 inode(s) -------------------------------------------- root@terre:/mnt/memdisk# rm -rf DST root@terre:/mnt/memdisk# mkdir DST root@terre:/mnt/memdisk# dar -x backup -R DST --fsa-scope none -q root@terre:/mnt/memdisk# find DST -name "*.ko" -print DST/modules/4.19.0-10-amd64/kernel/fs/ext4/ext4.ko DST/modules/4.19.0-10-amd64/kernel/drivers/extcon/extcon-core.ko DST/modules/4.19.0-11-amd64/kernel/fs/ext4/ext4.ko DST/modules/4.19.0-11-amd64/kernel/drivers/extcon/extcon-core.ko DST/modules/4.19.0-12-amd64/kernel/fs/ext4/ext4.ko DST/modules/4.19.0-12-amd64/kernel/drivers/extcon/extcon-core.ko root@terre:/mnt/memdisk#

    OK, we got what we wanted

    Rsync
    root@terre:/mnt/memdisk# rm -rf DST root@terre:/mnt/memdisk# rsync -arHAXS -f "- *.ko" /lib DST root@terre:/mnt/memdisk# find DST -name "*.ko" -print root@terre:/mnt/memdisk# ls DST lib root@terre:/mnt/memdisk#

    Same as previously, we don't want to exclude the .ko files whose name starts with ext:

    root@terre:/mnt/memdisk# rm -rf DST root@terre:/mnt/memdisk# rsync -arHAXS -f "+ ext*" -f "- *.ko" /lib DST root@terre:/mnt/memdisk# find DST -name "*.ko" -print DST/lib/modules/4.19.0-12-amd64/kernel/fs/ext4/ext4.ko DST/lib/modules/4.19.0-12-amd64/kernel/drivers/extcon/extcon-core.ko DST/lib/modules/4.19.0-11-amd64/kernel/fs/ext4/ext4.ko DST/lib/modules/4.19.0-11-amd64/kernel/drivers/extcon/extcon-core.ko DST/lib/modules/4.19.0-10-amd64/kernel/fs/ext4/ext4.ko DST/lib/modules/4.19.0-10-amd64/kernel/drivers/extcon/extcon-core.ko root@terre:/mnt/memdisk#

    OK, we got what we wanted

    Tar

    Same as previously, let's filter out the kernel object files:

    root@terre:/mnt/memdisk# tar -cf backup.tar --exclude "*.ko" /lib tar: Removing leading `/' from member names root@terre:/mnt/memdisk# rm -rf DST root@terre:/mnt/memdisk# mkdir DST root@terre:/mnt/memdisk# cd DST root@terre:/mnt/memdisk/DST# tar -xf ../backup.tar root@terre:/mnt/memdisk/DST# find . -name "*.ko" -print root@terre:/mnt/memdisk/DST#

    Now, we want to keep only those kernel object files starting with ext:

    root@terre:/mnt/memdisk# tar -cf backup.tar "ext*" --exclude "*.ko" /lib tar: ext*: Cannot stat: No such file or directory tar: Removing leading `/' from member names tar: Removing leading `/' from hard link targets tar: Exiting with failure status due to previous errors root@terre:/mnt/memdisk#

    Well, arguments passed outside of options do not seem to be expanded by tar, so using a mask to include some pattern is not possible. It seems the only option is to use file listings, something we will evaluate below.
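    A common workaround (sketched here on a toy tree with illustrative names) is to let find do the pattern expansion and feed the resulting list to tar through --files-from on standard input:

```shell
# Keep every file that is not a .ko, plus the .ko files starting with "ext".
set -e
work=$(mktemp -d); cd "$work"
mkdir -p lib/mod
touch lib/mod/ext4.ko lib/mod/extcon.ko lib/mod/nfs.ko lib/libfoo.so
# -prune on ".ko not starting with ext" drops them; other files are printed
find lib \( -name '*.ko' ! -name 'ext*' \) -prune -o -type f -print |
    tar -cf backup.tar --files-from=-
tar -tf backup.tar
```

    Since find rather than tar performs the matching, any pattern find supports can be used for inclusion.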

    Data filtering by filesystem

    We will use a tmpfs filesystem mounted twice thanks to mount's --bind option. The objective is first to save everything except a few given filesystems, then to save only the content of a few given filesystems. Here is the preparation phase:

    root@terre:/mnt/memdisk# mkdir SRC root@terre:/mnt/memdisk# mkdir SRC/D1 SRC/D2 SRC/D3 root@terre:/mnt/memdisk# mount -t tmpfs tmpfs SRC/D1 root@terre:/mnt/memdisk# mount --bind SRC/D1 SRC/D2 root@terre:/mnt/memdisk# mount -t tmpfs tmpfs SRC/D3 root@terre:/mnt/memdisk# ls SRC/D1 SRC/D2 SRC/D1: SRC/D2: root@terre:/mnt/memdisk# echo "Hello World" > SRC/D1/file.txt root@terre:/mnt/memdisk# ls SRC/D1 SRC/D2 SRC/D1: file.txt SRC/D2: file.txt root@terre:/mnt/memdisk# echo "give me your data, I'll tell your needs and what to buy" > SRC/gafam.com root@terre:/mnt/memdisk# echo "sight" > SRC/D3/democracy.org root@terre:/mnt/memdisk#
    Dar
    root@terre:/mnt/memdisk# dar -c backup -R SRC -MX:/mnt/memdisk/SRC/D1 -vs -vt -q Adding folder to archive: /mnt/memdisk/SRC/D3 Adding file to archive: /mnt/memdisk/SRC/D3/democracy.org Adding file to archive: /mnt/memdisk/SRC/gafam.com Skipping file: /mnt/memdisk/SRC/D2 Skipping file: /mnt/memdisk/SRC/D1 root@terre:/mnt/memdisk#

    We could exclude a filesystem, and its second appearance under D2 was also excluded without having to mention it. Let's include only D1 now:

    root@terre:/mnt/memdisk# rm -f backup.1.dar root@terre:/mnt/memdisk# dar -c backup -R SRC -MI:/mnt/memdisk/SRC/D1 -vs -vt -q Skipping file: /mnt/memdisk/SRC/D3 Adding file to archive: /mnt/memdisk/SRC/gafam.com Adding folder to archive: /mnt/memdisk/SRC/D2 Adding file to archive: /mnt/memdisk/SRC/D2/file.txt Adding folder to archive: /mnt/memdisk/SRC/D1 Adding file to archive: /mnt/memdisk/SRC/D1/file.txt root@terre:/mnt/memdisk#

    OK, we got what we wanted

    Rsync
    root@terre:/mnt/memdisk# rsync -arvHAXS --one-file-system SRC DST sending incremental file list created directory DST SRC/ SRC/gafam.com SRC/D1/ SRC/D2/ SRC/D3/ sent 283 bytes received 77 bytes 720.00 bytes/sec total size is 56 speedup is 0.16 root@terre:/mnt/memdisk#

    rsync has only one option related to filesystems: it restricts recursion to the filesystem of the source directory. We cannot specifically exclude some filesystems (with this option, all of them are excluded), nor specifically include some (without it, none is excluded, which is the default behavior).

    Tar
    root@terre:/mnt/memdisk# tar -cvf backup.tar --one-file-system SRC SRC/ SRC/D3/ tar: SRC/D3/: file is on a different filesystem; not dumped SRC/gafam.com SRC/D2/ tar: SRC/D2/: file is on a different filesystem; not dumped SRC/D1/ tar: SRC/D1/: file is on a different filesystem; not dumped root@terre:/mnt/memdisk#

    tar does not behave better than rsync on that topic.

    Data filtering by tag

    By tag we mean any mark the user can add to a file to drive its fate when the backup is made. The most common is the dump flag, but it is not always available; using some other mechanism (Extended Attributes, ...) can be an interesting alternative.

    Dar
    root@terre:/var/tmp# mkdir SRC root@terre:/var/tmp# echo "Hello" > file1.txt root@terre:/var/tmp# echo "World" > file2.txt root@terre:/var/tmp# chattr +d file1.txt root@terre:/var/tmp# setfattr -n user.no_dump file2.txt root@terre:/var/tmp# mv file1.txt file2.txt SRC root@terre:/var/tmp# dar -c backup -w -R SRC --nodump -vt -q Adding file to archive: /var/tmp/SRC/file2.txt Saving Extended Attributes for /var/tmp/SRC/file2.txt Saving Filesystem Specific Attributes for /var/tmp/SRC/file2.txt root@terre:/var/tmp# dar -c backup -w -R SRC --exclude-by-ea=user.no_dump -vt -q Adding file to archive: /var/tmp/SRC/file1.txt Saving Filesystem Specific Attributes for /var/tmp/SRC/file1.txt root@terre:/var/tmp#

    We have two mechanisms, one based on the dump flag and one on an arbitrary extended attribute. However, dar only supports tag-based exclusion of files from a backup, not inclusion.

    Rsync

    rsync does not seem to be able to filter based on an arbitrary mark

    Tar

    tar does not seem to be able to filter based on an arbitrary mark

    Data filtering by files listing

    We build a file listing and expect either to have only the listed files saved, or to have them excluded from the performed backup. Here is the listing preparation:

    root@terre:/mnt/memdisk# find /lib -name "*.ko" -o -print > include.txt root@terre:/mnt/memdisk# wc -l include.txt 4123 include.txt root@terre:/mnt/memdisk# find /lib -name "*.ko" -print > exclude.txt root@terre:/mnt/memdisk# wc -l exclude.txt 10677 exclude.txt root@terre:/mnt/memdisk#
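    These two find commands rely on find's operator precedence: the implicit -a binds tighter than -o, so -name "*.ko" -o -print prints exactly the names that do not match the pattern. A minimal sketch on a toy tree (names are illustrative):

```shell
set -e
work=$(mktemp -d); cd "$work"
mkdir -p lib
touch lib/a.ko lib/b.so
# reads as: ( -name '*.ko' ) -o ( -print )
# matching names take the first branch, which has no action, so only
# non-matching names reach -print
find lib -name '*.ko' -o -print > include.txt
find lib -name '*.ko' -print    > exclude.txt
cat include.txt exclude.txt
```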
    Dar
    root@terre:/mnt/memdisk# dar -c backup -R /lib -[ include.txt -------------------------------------------- 4122 inode(s) saved including 0 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 10677 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 14799 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 3945 inode(s) -------------------------------------------- root@terre:/mnt/memdisk#

    File inclusion is available; let's now check file exclusion:

    root@terre:/mnt/memdisk# dar -c backup -R /lib -] exclude.txt -------------------------------------------- 4122 inode(s) saved including 0 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 10677 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 14799 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 3945 inode(s) -------------------------------------------- root@terre:/mnt/memdisk#
    Rsync
    root@terre:/mnt/memdisk# rsync -aHAXS --files-from=include.txt / DST root@terre:/mnt/memdisk# find DST -print | wc -l 4124 root@terre:/mnt/memdisk# find DST -name "*.ko" -print root@terre:/mnt/memdisk#

    File inclusion is available. However, while we can exclude a list of patterns defined in a file, we cannot directly exclude a list of files: given rsync's filtering syntax, we must prepend each entry with "- ":

    root@terre:/mnt/memdisk# sed -r 's/^/- /' exclude.txt > rsync-exclude.txt root@terre:/mnt/memdisk# rm -rf DST root@terre:/mnt/memdisk# rsync -aHAXS --exclude-from=rsync-exclude.txt /lib DST root@terre:/mnt/memdisk# find DST -print | wc -l 4124 root@terre:/mnt/memdisk# find DST -name "*.ko" -print | wc -l 0 root@terre:/mnt/memdisk#

    So we are good, with this additional sed adaptation of the listing.

    Tar
    root@terre:/mnt/memdisk# tar -cvf backup.tar --files-from=include.txt | wc -l tar: Removing leading `/' from member names 98017 root@terre:/mnt/memdisk# tar -tf backup.tar | grep .ko | wc -l 73392 root@terre:/mnt/memdisk# grep .ko include.txt | wc -l 3 root@terre:/mnt/memdisk# grep .ko include.txt /lib/modules/4.19.0-12-amd64/kernel/sound/pci/korg1212 /lib/modules/4.19.0-11-amd64/kernel/sound/pci/korg1212 /lib/modules/4.19.0-10-amd64/kernel/sound/pci/korg1212 root@terre:/mnt/memdisk#

    The include.txt file does not contain any file with the .ko extension, yet tar saved all of them. Reading back the man page about the --files-from option ("The names read are handled the same way as command line arguments") explains why: the listing, from which all "*.ko" files were removed, still contains their parent directories, and naming a directory implies saving all of its content. Consequently we must list only the files, not the directories (which will prevent us from saving empty directories as such). Let's modify the include.txt file that way:

    find /lib -type d -o -name "*.ko" -o -print > tar-include.txt root@terre:/mnt/memdisk# tar -cvf backup.tar --files-from=tar-include.txt | wc -l tar: Removing leading `/' from member names 1532 root@terre:/mnt/memdisk# tar -tvf backup.tar | grep "*.ko" root@terre:/mnt/memdisk# wc -l tar-include.txt 1532 tar-include.txt root@terre:/mnt/memdisk# wc -l include.txt 4123 include.txt

    The difference between the 1532 entries saved by tar and the 4123 saved by rsync or dar comes from the many empty directories that cannot be saved as such by tar using this method.

    root@terre:/mnt/memdisk# tar -cvf backup.tar --exclude-from=exclude.txt /lib | wc -l tar: Removing leading `/' from member names 4123 root@terre:/mnt/memdisk# wc -l exclude.txt 10677 exclude.txt root@terre:/mnt/memdisk# tar -tf backup.tar | egrep "\.ko$" root@terre:/mnt/memdisk#

    The file listing exclusion works as expected

    Slicing

    For this test we will back up the content of /usr/bin of the running system. We select a slice size smaller than the biggest file under backup. The use cases for slicing usually imply compression (remote storage, cloud storage, limited removable media storage, ...).

    root@terre:/mnt/memdisk# ls -lh --sort=size /usr/bin | tac | tail -rwxr-xr-x 1 root root 8.0M Dec 18 2018 luajittex -rwxr-xr-x 1 root root 8.1M Dec 18 2018 luatex53 -rwxr-xr-x 1 root root 8.1M Dec 18 2018 luatex -rwxr-xr-x 1 root root 8.2M May 27 2019 wireshark -rwxr-xr-x 1 root root 12M Dec 21 2018 kstars -rwxr-xr-x 1 root root 15M Mar 12 2018 doxygen -rwxr-xr-x 1 root root 16M Jan 4 2019 stellarium -rwxr-xr-x 1 root root 19M Oct 12 19:46 mysql_embedded -rwxr-xr-x 1 root root 39M Sep 5 2019 emacs-gtk total 430M root@terre:/mnt/memdisk#
    Dar
    terre:/mnt/memdisk# dar -c backup -R /usr/bin -z6 -s 20M -q terre:/mnt/memdisk# ls -lh backup.* -rw-r--r-- 1 root root 20M Nov 13 11:30 backup.1.dar -rw-r--r-- 1 root root 20M Nov 13 11:30 backup.2.dar -rw-r--r-- 1 root root 20M Nov 13 11:30 backup.3.dar -rw-r--r-- 1 root root 20M Nov 13 11:30 backup.4.dar -rw-r--r-- 1 root root 20M Nov 13 11:30 backup.5.dar -rw-r--r-- 1 root root 20M Nov 13 11:30 backup.6.dar -rw-r--r-- 1 root root 20M Nov 13 11:30 backup.7.dar -rw-r--r-- 1 root root 7.7M Nov 13 11:30 backup.8.dar terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# dar -x backup -R DST -g emacs-gtk -E "echo openning slice %p/%b.%N.%e" openning slice /mnt/memdisk/backup.8.dar openning slice /mnt/memdisk/backup.4.dar openning slice /mnt/memdisk/backup.5.dar Restoration of FSA for /mnt/memdisk/DST/emacs-gtk aborted: Failed reading existing extX family FSA: Inappropriate ioctl for device Restoration of linux immutable FSA for /mnt/memdisk/DST/emacs-gtk aborted: Failed reading existing extX family FSA: Inappropriate ioctl for device -------------------------------------------- 1 inode(s) restored including 0 hard link(s) 0 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 2591 inode(s) ignored (excluded by filters) 0 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 2592 -------------------------------------------- EA restored for 0 inode(s) FSA restored for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# diff DST/emacs-gtk /usr/bin/emacs-gtk memdiskerre:/mnt/memdisk# echo $? 0 terre:/mnt/memdisk#

    We can also specify a different size for the first slice when using dar. In the past this was used to fill up a disk partially occupied by a previous incremental backup when saving onto CD-RW and DVD-RW, but it may still make sense with USB keys or other removable media.

    root@terre:/mnt/memdisk# dar -c backup -R /usr/bin -s 20M -S 1M -q --min-digit 3 root@terre:/mnt/memdisk# ls -lh total 361M -rw-r--r-- 1 root root 1.0M Nov 6 18:57 backup.001.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.002.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.003.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.004.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.005.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.006.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.007.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.008.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.009.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.010.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.011.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.012.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.013.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.014.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.015.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.016.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.017.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.018.dar -rw-r--r-- 1 root root 20M Nov 6 18:57 backup.019.dar root@terre:/mnt/memdisk# dar -c backup -R /usr/bin -s 20M -S 200M -q --min-digit 3 root@terre:/mnt/memdisk# ls -lh total 361M -rw-r--r-- 1 root root 200M Nov 6 18:58 backup.001.dar -rw-r--r-- 1 root root 20M Nov 6 18:59 backup.002.dar -rw-r--r-- 1 root root 20M Nov 6 18:59 backup.003.dar -rw-r--r-- 1 root root 20M Nov 6 18:59 backup.004.dar -rw-r--r-- 1 root root 20M Nov 6 18:59 backup.005.dar -rw-r--r-- 1 root root 20M Nov 6 18:59 backup.006.dar -rw-r--r-- 1 root root 20M Nov 6 18:59 backup.007.dar -rw-r--r-- 1 root root 20M Nov 6 18:59 backup.008.dar -rw-r--r-- 1 root root 20M Nov 6 18:59 backup.009.dar -rw-r--r-- 1 root root 913K Nov 6 18:59 backup.010.dar root@terre:/mnt/memdisk#
    Rsync

    rsync cannot split files into slices: it does not generate a backup file at all, it copies files as they are. You thus cannot split the data into slices to fit a particular restricted storage space.

    Tar
    root@terre:/mnt/memdisk# tar -czf backup.tar -M -L 20480 /usr/bin
    tar: Cannot use multi-volume compressed archives
    Try 'tar --help' or 'tar --usage' for more information.
    root@terre:/mnt/memdisk#

    As reported by tar above, although multi-volume support exists, it is quite restrictive: it cannot be combined with compression.

    terre:/mnt/memdisk# tar -cf backup -M -L 20480 /usr/bin
    tar: Removing leading `/' from member names
    Prepare volume #2 for 'backup' and hit return:
    tar: Removing leading `/' from hard link targets
    Prepare volume #3 for 'backup' and hit return:
    Prepare volume #4 for 'backup' and hit return:
    Prepare volume #5 for 'backup' and hit return:
    Prepare volume #6 for 'backup' and hit return:
    Prepare volume #7 for 'backup' and hit return:
    Prepare volume #8 for 'backup' and hit return:
    Prepare volume #9 for 'backup' and hit return:
    Prepare volume #10 for 'backup' and hit return:
    Prepare volume #11 for 'backup' and hit return:
    Prepare volume #12 for 'backup' and hit return:
    Prepare volume #13 for 'backup' and hit return:
    Prepare volume #14 for 'backup' and hit return:
    Prepare volume #15 for 'backup' and hit return:
    Prepare volume #16 for 'backup' and hit return:
    Prepare volume #17 for 'backup' and hit return:
    Prepare volume #18 for 'backup' and hit return:
    Prepare volume #19 for 'backup' and hit return:
    Prepare volume #20 for 'backup' and hit return:
    Prepare volume #21 for 'backup' and hit return:
    terre:/mnt/memdisk#
    terre:/mnt/memdisk# ls -l backup*
    -rw-r--r-- 1 root root 19527680 Nov 13 11:40 backup
    terre:/mnt/memdisk#

    But even without compression, tar remains restrictive: it does not produce distinctly named files by itself, so you have to move each completed volume aside and hit return each time.
    Note also that since tar cannot compress a multi-volume archive, the backup that fits in 8 compressed volumes with dar requires 21 volumes with tar.
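    GNU tar can at least automate the volume swapping through its -F (--new-volume-script) option. Here is a minimal sketch of such a script body (the function name and the renaming scheme are our own choice, not taken from the tests above): tar exports TAR_ARCHIVE and TAR_VOLUME to the script, so moving the finished volume aside yields distinctly named files, somewhat like dar slices:

```shell
# rotate_volume: body of a hypothetical new-volume script for GNU tar's
# -F / --new-volume-script option. tar exports TAR_ARCHIVE (the archive
# file name) and TAR_VOLUME (the number of the volume about to be written);
# moving the completed volume aside keeps tar from overwriting it.
rotate_volume() {
    mv "$TAR_ARCHIVE" "$TAR_ARCHIVE.$((TAR_VOLUME - 1))"
}
```

    Saved as an executable script that calls rotate_volume, it could be used as: tar -cf backup -M -L 20480 -F ./new-volume.sh /usr/bin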

    The multi-volume support of tar seems well suited to local removable tape devices, but it will cost more than twice the tape dar needs, even if tape media is your only target. Here is an example showing how dar can write a multi-volume, compressed backup to tape, pausing between volumes as tar does:

    terre:/mnt/memdisk# dar -c backup -R /usr/bin -z6 -s 20M -E "echo writing volume %N to tape" -E "cat < %p/%b.%N.%e > /dev/mt" -p
    writing volume 1 to tape
    Finished writing to file 1, ready to continue ? [return = YES | Esc = NO]
    Continuing...
    writing volume 2 to tape
    Finished writing to file 2, ready to continue ? [return = YES | Esc = NO]
    Continuing...
    writing volume 3 to tape
    Finished writing to file 3, ready to continue ? [return = YES | Esc = NO]
    Continuing...
    writing volume 4 to tape
    Finished writing to file 4, ready to continue ? [return = YES | Esc = NO]
    Continuing...
    writing volume 5 to tape
    Finished writing to file 5, ready to continue ? [return = YES | Esc = NO]
    Continuing...
    writing volume 6 to tape
    Finished writing to file 6, ready to continue ? [return = YES | Esc = NO]
    Continuing...
    writing volume 7 to tape
    Finished writing to file 7, ready to continue ? [return = YES | Esc = NO]
    Continuing...
    writing volume 8 to tape
    --------------------------------------------
    2592 inode(s) saved
       including 5 hard link(s) treated
    0 inode(s) changed at the moment of the backup and could not be saved properly
    0 byte(s) have been wasted in the archive to resave changing files
    0 inode(s) with only metadata changed
    0 inode(s) not saved (no inode/file change)
    0 inode(s) failed to be saved (filesystem error)
    0 inode(s) ignored (excluded by filters)
    0 inode(s) recorded as deleted from reference backup
    --------------------------------------------
    Total number of inode(s) considered: 2592
    --------------------------------------------
    EA saved for 3 inode(s)
    FSA saved for 2152 inode(s)
    --------------------------------------------
    terre:/mnt/memdisk#

    Symmetric encryption

    Encryption usually targets data with a relatively long lifetime, and compressing at the same time increases security, as it increases the "randomness" of the data to cipher. So we will use both in our tests (gzip with a compression level of 6).

    A point to pay attention to is the way the password/passphrase is provided. Putting it on the command-line could let other users of the same system read it. An interactive prompt is better, as is storing the password in a file with restricted read access, which in addition allows automation.

    Dar
    root@terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -K aes256: -q -z6
    Archive backup requires a password:
    Please confirm your password:
    root@terre:/mnt/memdisk# dar -l backup -q
    Archive backup requires a password:
    Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted
    Archive version format               : 11
    Compression algorithm used           : gzip
    Compression block size used          : 0
    Symmetric key encryption used        : AES 256
    Asymmetric key encryption used       : none
    Archive is signed                    : no
    Sequential reading marks             : present
    User comment                         : N/A
    KDF iteration count                  : 10000
    KDF hash algorithm                   : argon2
    Salt size                            : 32 bytes
    Catalogue size in archive            : 101907 bytes
    Archive is composed of 1 file(s)
    File size: 155070897 bytes
    The global data compression ratio is: 64%

    CATALOGUE CONTENTS :

    total number of inode  : 2589
       fully saved         : 2589
       binary delta patch  : 0
       inode metadata only : 0
    distribution of inode(s)
     - directories         : 2
     - plain files         : 2152
     - symbolic links      : 435
     - named pipes         : 0
     - unix sockets        : 0
     - character devices   : 0
     - block devices       : 0
     - Door entries        : 0
    hard links information
     - number of inode with hard link           : 5
     - number of reference to hard linked inodes: 10
    destroyed entries information
       0 file(s) have been recorded as destroyed since backup of reference
    root@terre:/mnt/memdisk# touch pass.dcf
    root@terre:/mnt/memdisk# chmod go-rwx pass.dcf
    root@terre:/mnt/memdisk# cat >> pass.dcf
    -K "aes256:hello world!"
    root@terre:/mnt/memdisk# ls -l pass.dcf
    -rw------- 1 root root 25 Nov  9 11:37 pass.dcf
    root@terre:/mnt/memdisk# rm backup.1.dar
    rm: remove regular file 'backup.1.dar'? y
    root@terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -B pass.dcf -q -z6
    root@terre:/mnt/memdisk# dar -l backup -q -B pass.dcf
    Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted
    Archive version format               : 11
    Compression algorithm used           : gzip
    Compression block size used          : 0
    Symmetric key encryption used        : AES 256
    Asymmetric key encryption used       : none
    Archive is signed                    : no
    Sequential reading marks             : present
    User comment                         : N/A
    KDF iteration count                  : 10000
    KDF hash algorithm                   : argon2
    Salt size                            : 32 bytes
    Catalogue size in archive            : 102310 bytes
    Archive is composed of 1 file(s)
    File size: 155132433 bytes
    The global data compression ratio is: 64%

    CATALOGUE CONTENTS :

    total number of inode  : 2589
       fully saved         : 2589
       binary delta patch  : 0
       inode metadata only : 0
    distribution of inode(s)
     - directories         : 2
     - plain files         : 2152
     - symbolic links      : 435
     - named pipes         : 0
     - unix sockets        : 0
     - character devices   : 0
     - block devices       : 0
     - Door entries        : 0
    hard links information
     - number of inode with hard link           : 5
     - number of reference to hard linked inodes: 10
    destroyed entries information
       0 file(s) have been recorded as destroyed since backup of reference
    root@terre:/mnt/memdisk#

    We can thus provide the password either on the command-line (not recommended), at an interactive prompt once dar is launched, or from a protected configuration file. In the following we add slicing to encryption to see whether or not dar deciphers the whole backup to recover a single file:

    root@terre:/mnt/localdisk# rm -rf backup.*
    root@terre:/mnt/localdisk# dar -c backup -R / -g usr/bin -K aes256: -s 1M -q -z6
    Archive backup requires a password:
    Please confirm your password:
    root@terre:/mnt/localdisk# ls -l backup.* | wc -l
    148
    root@terre:/mnt/localdisk# dar -x backup -g usr/bin/emacs-gtk -E "echo openning slice %b.%N.%e" -q
    openning slice backup.148.dar
    Archive backup requires a password:
    Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted
    openning slice backup.1.dar
    openning slice backup.80.dar
    openning slice backup.81.dar
    openning slice backup.82.dar
    openning slice backup.83.dar
    openning slice backup.84.dar
    root@terre:/mnt/localdisk#

    As seen above, dar does not need to decipher nor uncompress the whole backup to recover a single file. The use of slicing lets us see which slices it accessed, but the behavior is the same without slicing and can be measured by the execution time (see the performance test logs).

    Rsync

    rsync cannot cipher data. It can rely on ssh to cipher the data over the network, but the data is always stored in clear text in the end.

    Tar

    There is no native support for ciphering with tar. You can however pipe tar's output to openssl to cipher the generated backup on the fly, as a whole.

    root@terre:/mnt/memdisk# tar -czf - /usr/bin | openssl enc -e -aes256 -out backup.tar.gz.crypted
    tar: Removing leading `/' from member names
    enter aes-256-cbc encryption password:
    Verifying - enter aes-256-cbc encryption password:
    *** WARNING : deprecated key derivation used.
    Using -iter or -pbkdf2 would be better.
    tar: Removing leading `/' from hard link targets
    root@terre:/mnt/memdisk# openssl enc -d -aes256 -in backup.tar.gz.crypted | tar -xz
    enter aes-256-cbc decryption password:
    *** WARNING : deprecated key derivation used.
    Using -iter or -pbkdf2 would be better.
    root@terre:/mnt/memdisk# tar -czf - /usr/bin | openssl enc -e -aes256 -out backup.tar.gz.crypted -pass file:pass.txt
    tar: Removing leading `/' from member names
    *** WARNING : deprecated key derivation used.
    Using -iter or -pbkdf2 would be better.
    tar: Removing leading `/' from hard link targets
    root@terre:/mnt/memdisk# openssl enc -d -aes256 -in backup.tar.gz.crypted -pass file:pass.txt | tar -xz
    *** WARNING : deprecated key derivation used.
    Using -iter or -pbkdf2 would be better.
    root@terre:/mnt/memdisk#

    With openssl, tar gains both the ability to read the password/passphrase from an interactive prompt and from a protected file. However, you will have to remember which algorithm you used in addition to the passphrase. And since the ciphering is done on the backup as a whole, you will have to decipher the whole backup even to restore a single file. If the backup is large, this may take a long time and may require downloading a lot of data from remote storage.

    We see that ciphering with tar is possible at the cost of a somewhat complex command-line. But this is error-prone, as shown by the warning that the key derivation function used is deprecated and should be replaced. Moreover, you will have to remember which key derivation function and parameters you used, in addition to the passphrase and the ciphering algorithm.
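    The deprecation warning above can be silenced by selecting a modern key derivation explicitly: -pbkdf2 selects PBKDF2 and -iter sets its iteration count. Here is a sketch of the same pipeline with those options; a small throw-away tree stands in for /usr/bin, and the password file content and iteration count are only example values:

```shell
# Same tar+openssl pipeline as above, but with an explicit KDF:
# -pbkdf2 selects PBKDF2 and -iter its iteration count, which silences
# the "deprecated key derivation" warning. Paths/values are illustrative.
mkdir -p SRC && echo 'Hello World!' > SRC/file.txt
( umask 077 && echo 'hello world!' > pass.txt )   # password file, mode 600
tar -czf - SRC \
  | openssl enc -e -aes256 -pbkdf2 -iter 100000 -pass file:pass.txt \
        -out backup.tar.gz.crypted
# decryption must repeat the very same KDF options
openssl enc -d -aes256 -pbkdf2 -iter 100000 -pass file:pass.txt \
        -in backup.tar.gz.crypted | tar -xzO SRC/file.txt
```

    Note that -pbkdf2 and -iter must be given identically at decryption time: this is exactly the "remember the KDF and its parameters" burden described above.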

    Note: you can also combine openssl with dar as we did for tar, but it brings all the drawbacks we saw with tar.

    Asymmetric encryption

    The objective is to create a backup ciphered using GnuPG public/private key pair, restore the whole backup and restore a single file from it. We will also use compression (gzip level 6) as it may make sense for the corresponding use cases (data exchange over Internet for example).

    Dar
    terre:/mnt/memdisk# dar -c backup -K gnupg::root@terre.systeme-solaire.espace -R SRC -z6 -q
    terre:/mnt/memdisk# dar -l backup -q
    Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted
    Archive version format               : 11
    Compression algorithm used           : gzip
    Compression block size used          : 0
    Symmetric key encryption used        : AES 256
    Asymmetric key encryption used       : gnupg
    Archive is signed                    : no
    Sequential reading marks             : present
    User comment                         : N/A
    Catalogue size in archive            : 68669 bytes
    Archive is composed of 1 file(s)
    File size: 158261425 bytes
    The global data compression ratio is: 64%

    CATALOGUE CONTENTS :

    total number of inode  : 2593
       fully saved         : 2593
       binary delta patch  : 0
       inode metadata only : 0
    distribution of inode(s)
     - directories         : 1
     - plain files         : 2157
     - symbolic links      : 435
     - named pipes         : 0
     - unix sockets        : 0
     - character devices   : 0
     - block devices       : 0
     - Door entries        : 0
    hard links information
     - number of inode with hard link           : 0
     - number of reference to hard linked inodes: 0
    destroyed entries information
       0 file(s) have been recorded as destroyed since backup of reference
    terre:/mnt/memdisk# ls -l backup.1.dar
    -rw-r--r-- 1 root root 158261425 Nov  9 16:04 backup.1.dar
    terre:/mnt/memdisk#

    As displayed in the backup header above, the underlying encryption is symmetric (AES 256 by default), but the AES key is stored ciphered with the public key of the backup recipient, whose email address is provided (or email addresses, if more than one recipient is expected). This key is randomly chosen by dar and stored ciphered in the archive header. Thus the overall behavior, performance and security of GnuPG within dar is equivalent to that of the chosen symmetric algorithm, with the ability to quickly restore some or all files from an archive without first downloading and deciphering the whole backup.

    As seen above, no password or passphrase is asked for, as the recipient email is our own (root@terre.systeme-solaire.espace). Let's cipher for another recipient:

    terre:/mnt/memdisk# dar -c backup -K gnupg::dar.linux@free.fr -R SRC -z6 -q -w
    terre:/mnt/memdisk# ls -l backup.1.dar
    -rw-r--r-- 1 root root 158230913 Nov  9 16:22 backup.1.dar
    terre:/mnt/memdisk# dar -l backup -q
    FATAL error, aborting operation: Unexpected error reported by GPGME: No secret key
    terre:/mnt/memdisk# dar -c backup -K gnupg::dar.linux@free.fr,root@terre.systeme-solaire.espace -R SRC -z6 -q -w
    terre:/mnt/memdisk# dar -l backup -q
    Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted
    Archive version format               : 11
    Compression algorithm used           : gzip
    Compression block size used          : 0
    Symmetric key encryption used        : AES 256
    Asymmetric key encryption used       : gnupg
    Archive is signed                    : no
    Sequential reading marks             : present
    User comment                         : N/A
    Catalogue size in archive            : 68624 bytes
    Archive is composed of 1 file(s)
    File size: 158252223 bytes
    The global data compression ratio is: 64%

    CATALOGUE CONTENTS :

    total number of inode  : 2593
       fully saved         : 2593
       binary delta patch  : 0
       inode metadata only : 0
    distribution of inode(s)
     - directories         : 1
     - plain files         : 2157
     - symbolic links      : 435
     - named pipes         : 0
     - unix sockets        : 0
     - character devices   : 0
     - block devices       : 0
     - Door entries        : 0
    hard links information
     - number of inode with hard link           : 0
     - number of reference to hard linked inodes: 0
    destroyed entries information
       0 file(s) have been recorded as destroyed since backup of reference
    terre:/mnt/memdisk#

    Here we saw that ciphering for a recipient other than ourselves does not let us read the resulting backup; however, we can define several recipients, and if we add ourselves, we can read the backup just like our primary recipients.

    Rsync
    rsync is not able to perform asymmetric encryption of backed up files.
    Tar

    Tar cannot handle asymmetric encryption alone; as for symmetric encryption, we must use an external tool that performs the ciphering operation outside of tar.

    terre:/mnt/memdisk# tar -czf - SRC | gpg --encrypt --recipient root@terre.systeme-solaire.espace --output backup.tar.gz.gpg
    terre:/mnt/memdisk# ls -l backup.tar.gz.gpg
    -rw-r--r-- 1 root root 155337814 Nov  9 16:45 backup.tar.gz.gpg
    terre:/mnt/memdisk# gpg --decrypt backup.tar.gz.gpg | tar -xzf -
    gpg: encrypted with 3072-bit RSA key, ID 97E13D38B007DF30, created 2020-08-08
          "root@terre <root@terre.systeme-solaire.espace>"
    terre:/mnt/memdisk# tar -czf - SRC | gpg --encrypt --recipient dar.linux@free.fr --output backup.tar.gz.gpg
    terre:/mnt/memdisk# gpg --decrypt backup.tar.gz.gpg | tar -xzf -
    gpg: encrypted with 4096-bit RSA key, ID DB0A2141A4D96ECA, created 2012-09-13
          "Denis Corbin (http://dar.linux.free.fr/) <dar.linux@free.fr>"
    gpg: decryption failed: No secret key
    gzip: stdin: unexpected end of file
    tar: Child returned status 1
    tar: Error is not recoverable: exiting now
    terre:/mnt/memdisk# tar -czf - SRC | gpg --encrypt --recipient dar.linux@free.fr \
          --recipient root@terre.systeme-solaire.espace --output backup.tar.gz.gpg
    terre:/mnt/memdisk# gpg --decrypt backup.tar.gz.gpg | tar -xzf -
    gpg: encrypted with 4096-bit RSA key, ID DB0A2141A4D96ECA, created 2012-09-13
          "Denis Corbin (http://dar.linux.free.fr/) <dar.linux@free.fr>"
    gpg: encrypted with 3072-bit RSA key, ID 97E13D38B007DF30, created 2020-08-08
          "root@terre <root@terre.systeme-solaire.espace>"
    terre:/mnt/memdisk#

    As for symmetric encryption, the fact that the whole backup is ciphered at once implies downloading back the whole backup even to recover just one file.

    Protection against plain-text attack

    Dar
    devuan:/mnt/memdisk# time dar -c backup -K "aes256:hello world!" -at -1 0 -R SRC -q -w
    9.782u 3.413s 0:06.28 210.0%	0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# la backup.1.dar
    -rw-r--r-- 1 root root 1572706497 Nov  9 14:50 backup.1.dar
    devuan:/mnt/memdisk# time dar -c backup -K "aes256:hello world!" -at -1 0 -R SRC -q -w
    9.173u 2.845s 0:05.50 218.3%	0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# la backup.1.dar
    -rw-r--r-- 1 root root 1572655217 Nov  9 14:50 backup.1.dar
    devuan:/mnt/memdisk#

    When ciphering the same data several times (with symmetric or asymmetric encryption), the resulting backup size changes each time. This is due to the garbage (the elastic buffer) dar adds at the beginning and at the end of the data to cipher. This way, even though a dar backup has a well-known structure, it is not easy to know precisely where its structures are positioned in the backup file, which makes a plain-text attack much more difficult to succeed, if even possible in a reasonable time.

    Rsync

    rsync does not provide any way to cipher the backup; it is thus not concerned by protection against plain-text attacks.

    Tar
    devuan:/mnt/memdisk/SRC# time ../tar.backup ../backup.tar.crypted usr
    4.112u 2.343s 0:04.72 136.6%	0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/SRC# ls -l ../bac
    backup.1.dar   backup.tar.crypted
    devuan:/mnt/memdisk/SRC# ls -l ../backup.tar.crypted
    -rw-r--r-- 1 root root 1603594272 Nov  9 14:56 ../backup.tar.crypted
    devuan:/mnt/memdisk/SRC# time ../tar.backup ../backup.tar.crypted usr
    3.952u 2.564s 0:04.79 135.9%	0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/SRC# ls -l ../backup.tar.crypted
    -rw-r--r-- 1 root root 1603594272 Nov  9 14:56 ../backup.tar.crypted
    devuan:/mnt/memdisk/SRC#

    tar by itself does not provide any ciphering mechanism; however, you can cipher the tar-generated backups with an external tool (for example openssl for symmetric encryption or gpg for asymmetric encryption). But none of these mechanisms protect against plain-text attacks: tar backups have somewhat predictable header contents, and as shown above, ciphering the same data twice produces the exact same backup size.

    Key Derivation Function

    Dar

    dar uses argon2 by default, with 10,000 iterations. It can also use PKCS#5 v2 (PBKDF2) with the md5, sha1 or sha512 algorithms. The user can set the KDF function and the iteration count, so we can measure the execution time variation added by the iteration count (keeping in mind that the data to cipher also changes slightly from run to run, depending on the amount of random garbage dar wraps it with):

    terre:/mnt/memdisk# time dar -c backup -R SRC -K aes:hello --kdf-param 100k:sha1 -w -q
    4.904u 0.572s 0:05.49 99.6%	0+0k 0+0io 0pf+0w
    terre:/mnt/memdisk# time dar -c backup -R SRC -K aes:hello --kdf-param 500k:sha1 -w -q
    5.805u 0.272s 0:06.08 99.8%	0+0k 0+0io 0pf+0w
    terre:/mnt/memdisk# time dar -c backup -R SRC -K aes:hello --kdf-param 1M:sha1 -w -q
    6.852u 0.308s 0:07.18 99.5%	0+0k 0+0io 0pf+0w
    terre:/mnt/memdisk# time dar -c backup -R SRC -K aes:hello --kdf-param 10k:argon2 -w -q
    5.092u 0.870s 0:03.50 170.2%	0+0k 0+0io 0pf+0w
    terre:/mnt/memdisk# time dar -c backup -R SRC -K aes:hello --kdf-param 10k:argon2 -w -q
    5.232u 0.760s 0:03.54 169.2%	0+0k 0+0io 0pf+0w
    terre:/mnt/memdisk# time dar -c backup -R SRC -K aes:hello --kdf-param 20k:argon2 -w -q
    5.778u 0.822s 0:04.14 159.1%	0+0k 0+0io 0pf+0w
    terre:/mnt/memdisk# time dar -c backup -R SRC -K aes:hello --kdf-param 100k:argon2 -w -q
    10.613u 0.831s 0:09.00 127.1%	0+0k 0+0io 0pf+0w
    terre:/mnt/memdisk# time dar -c backup -R SRC -K aes:hello --kdf-param 1M:argon2 -w -q
    66.862u 0.666s 1:05.14 103.6%	0+0k 0+0io 0pf+0w
    terre:/mnt/memdisk#
    Rsync

    rsync does not provide any way to cipher the backup, it is not concerned by KDF.

    Tar

    As of today (year 2020) openssl only supports PBKDF2: no support for argon2 is available. Argon2 was the winner of the Password Hashing Competition in July 2015, while PBKDF2 was published by the IETF in September 2000 as RFC 2898.

    File change detection

    In order to stress each backup software on that aspect, we will use an ugly script, always_change, that loops forever, permanently invoking touch on a given file. For the test, we create a source tree to back up, containing a file of 10 MiB to which we apply this script:
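    The always_change script itself is not reproduced here; a hypothetical reconstruction of it could look like the following, wrapped in a function so it can be started in the background:

```shell
# Hypothetical reconstruction of the always_change script (the original is
# not shown): it updates the timestamps of the given file in a loop, so any
# backup tool reading the file sees it change under its feet.
always_change() {
    while :; do
        touch "$1"     # bumps atime/mtime/ctime on every iteration
        sleep 0.01     # avoid a pure busy loop
    done
}
```

    It would be launched as in the transcript below: always_change SRC/hello_world &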

    terre:/mnt/memdisk# mkdir SRC
    terre:/mnt/memdisk# dd if=/dev/zero of=SRC/hello_world bs=10240 count=1024
    1024+0 records in
    1024+0 records out
    10485760 bytes (10 MB, 10 MiB) copied, 0.0107294 s, 977 MB/s
    terre:/mnt/memdisk# ./always_change SRC/hello_world &
    [1] 7433
    terre:/mnt/memdisk# stat SRC/hello_world
      File: SRC/hello_world
      Size: 10485760   Blocks: 20480      IO Block: 4096   regular file
    Device: 1bh/27d    Inode: 375588      Links: 1
    Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2020-11-10 16:34:13.806106695 +0100
    Modify: 2020-11-10 16:34:13.806106695 +0100
    Change: 2020-11-10 16:34:13.806106695 +0100
     Birth: -
    terre:/mnt/memdisk# stat SRC/hello_world
      File: SRC/hello_world
      Size: 10485760   Blocks: 20480      IO Block: 4096   regular file
    Device: 1bh/27d    Inode: 375588      Links: 1
    Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2020-11-10 16:34:14.838104981 +0100
    Modify: 2020-11-10 16:34:14.838104981 +0100
    Change: 2020-11-10 16:34:14.838104981 +0100
     Birth: -
    terre:/mnt/memdisk# jobs
    [1]  + Running    ./always_change SRC/hello_world
    terre:/mnt/memdisk# ls -l SRC
    total 10240
    -rw-r--r-- 1 root root 10485760 Nov 10 16:34 hello_world
    terre:/mnt/memdisk#
    Dar
    terre:/mnt/memdisk# dar -c backup -R SRC -q
    WARNING! File modified while reading it for backup, but no more retry allowed: /mnt/memdisk/SRC/hello_world
    terre:/mnt/memdisk# dar -l backup
    [Data ][D][ EA ][FSA][Compr][S]| Permission | User  | Group | Size    | Date                          | filename
    --------------------------------+------------+-------+-------+---------+-------------------------------+------------
    [DIRTY][ ]       [---][ 99%][X]  -rw-r--r--   0       0       10 Mio    Tue Nov 10 16:34:55 2020        hello_world
    terre:/mnt/memdisk# dar -x backup -R DST
    File /mnt/memdisk/DST/hello_world has changed during backup and is probably not saved in a valid state ("dirty file"), do you want to consider it for restoration anyway? [return = YES | Esc = NO]
    Continuing...
    --------------------------------------------
    1 inode(s) restored
       including 0 hard link(s)
    0 inode(s) not restored (not saved in archive)
    0 inode(s) not restored (overwriting policy decision)
    0 inode(s) ignored (excluded by filters)
    0 inode(s) failed to restore (filesystem error)
    0 inode(s) deleted
    --------------------------------------------
    Total number of inode(s) considered: 1
    --------------------------------------------
    EA restored for 0 inode(s)
    FSA restored for 0 inode(s)
    --------------------------------------------
    terre:/mnt/memdisk#

    • dar properly detects the file change and issues a warning during the backup.
    • It even retries saving the file several times (3 times by default).
    • the resulting backup keeps trace of this context by flagging the file as DIRTY
    • When restoring the data, a warning is shown (default behavior) and the user is asked for confirmation.

    Rsync
    terre:/mnt/memdisk# rsync -arvHAXqz --delete SRC DST
    terre:/mnt/memdisk#

    rsync does not show anything nor behave differently (no retry, no change notification).

    Tar
    terre:/mnt/memdisk# tar -cf backup.tar SRC
    tar: SRC/hello_world: file changed as we read it
    terre:/mnt/memdisk# tar -tvf backup.tar
    drwxr-xr-x root/root         0 2020-11-10 16:33 SRC/
    -rw-r--r-- root/root  10485760 2020-11-10 16:41 SRC/hello_world
    terre:/mnt/memdisk# rm -rf DST
    terre:/mnt/memdisk# mkdir DST
    terre:/mnt/memdisk# cd DST
    terre:/mnt/memdisk/DST# tar -xf ../backup.tar
    terre:/mnt/memdisk/DST# ls -l SRC
    total 10240
    -rw-r--r-- 1 root root 10485760 Nov 10 16:41 hello_world
    terre:/mnt/memdisk/DST#

    • tar properly detects the file change and issues a warning during the backup.
    • it does not retry to save the file
    • the resulting backup keeps no visible trace of this possible data corruption
    • When restoring the data, no warning is issued and the restoration proceeds as if the file had been saved properly

    Multi-level backup

    For this test we make a full backup of a Linux source tree, then rename the Documentation directory to doc and make a differential backup of the whole. Renaming files is expected to cost at worst the same as removing some files and adding new ones; we should not see all data saved again:

    Dar
    devuan:/mnt/memdisk# du -B1 -s SRC
    1121144832	SRC
    devuan:/mnt/memdisk# dar -c full -R SRC -z6 -q
    devuan:/mnt/memdisk# cd SRC/linux-5.9.2/
    devuan:/mnt/memdisk/SRC/linux-5.9.2# mv Documentation/ doc
    devuan:/mnt/memdisk/SRC/linux-5.9.2# cd ../..
    devuan:/mnt/memdisk# dar -c diff -A full -R SRC -z6 -q
    devuan:/mnt/memdisk# ls -l *.dar
    -rw-r--r-- 1 root root  17858927 Nov  1 18:18 diff.1.dar
    -rw-r--r-- 1 root root 219047658 Nov  1 18:14 full.1.dar
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# dar -x full -R DST -q
    devuan:/mnt/memdisk# dar -x diff -R DST -q -w
    devuan:/mnt/memdisk# diff -r SRC DST && echo "same data" || echo "different data"
    same data
    devuan:/mnt/memdisk#

    We can see that restoring the full backup and then the differential backup over it leads to the exact same directory tree as the saved source files.

    Rsync
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# rsync -arHAXz --delete --info=stats SRC/* BACKUP
    sent 214,380,105 bytes  received 1,359,591 bytes  9,180,412.60 bytes/sec
    total size is 954,869,250  speedup is 4.43
    devuan:/mnt/memdisk# cd SRC/linux-5.9.2/
    devuan:/mnt/memdisk/SRC/linux-5.9.2# mv Documentation/ doc
    devuan:/mnt/memdisk/SRC/linux-5.9.2# cd ../..
    devuan:/mnt/memdisk# rsync -arHAXz --delete --info=stats SRC/* BACKUP
    sent 12,923,292 bytes  received 680,190 bytes  3,886,709.14 bytes/sec
    total size is 954,869,250  speedup is 70.19
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# rsync -arHAXz --delete --info=stats BACKUP/* DST
    sent 214,371,610 bytes  received 1,359,603 bytes  9,180,051.62 bytes/sec
    total size is 954,869,250  speedup is 4.43
    devuan:/mnt/memdisk#

    We see that after the modification the amount of data pushed to the backup by rsync drops from 214 MiB to only 12 MiB, so we can consider this a differential backup: this part of the multi-level backup aspect is addressed. But we have lost access to the first backup: it has been overwritten by the new one, so we lose history, though that is a different feature.

    Tar
    devuan:/mnt/memdisk# tar --listed-incremental=snapshot.file -czf full.tar.gz SRC
    devuan:/mnt/memdisk# cd SRC/linux-5.9.2/
    devuan:/mnt/memdisk/SRC/linux-5.9.2# mv Documentation/ doc
    devuan:/mnt/memdisk/SRC/linux-5.9.2# cd ../..
    devuan:/mnt/memdisk# tar --listed-incremental=snapshot.file -czf diff.tar.gz SRC
    devuan:/mnt/memdisk# ls -l
    total 190488
    drwxr-xr-x 3 root root        60 Oct 31 19:37 SRC
    -rw-r--r-- 1 root root   9654445 Oct 31 19:49 diff.tar.gz
    -rw-r--r-- 1 root root 184036391 Oct 31 19:49 full.tar.gz
    -rw-r--r-- 1 root root   1361962 Oct 31 19:49 snapshot.file
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# tar --listed-incremental=/dev/null -xf ../full.tar.gz
    devuan:/mnt/memdisk/DST# tar --listed-incremental=/dev/null -xf ../diff.tar.gz
    devuan:/mnt/memdisk/DST# cd ..
    devuan:/mnt/memdisk# diff -r SRC DST/SRC && echo "same data" || echo "different data"
    same data
    devuan:/mnt/memdisk#

    Here too, we get the exact same directory tree as the original, modified data.

    Binary Delta

    To evaluate the support for binary delta, we make a first backup of a Debian ISO image, modify one bit of it using the bitflip script, then make a differential backup. We expect the differential backup not to resave the whole file, and the restoration of the full then differential backup to match the modified file.
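    The bitflip script is not reproduced in this document; a hypothetical reconstruction could be the following function, which inverts the least significant bit of the byte at a given offset without changing the file size:

```shell
# Hypothetical reconstruction of the bitflip script (the original is not
# shown): flip the lowest bit of the byte at offset $1 (in bytes) of file $2.
bitflip() {
    offset=$1 file=$2
    # read the single byte at the offset as a decimal value
    byte=$(dd if="$file" bs=1 skip="$offset" count=1 2>/dev/null \
           | od -An -tu1 | tr -d ' ')
    # write it back with its lowest bit inverted, without truncating the file
    printf "$(printf '\\%03o' $(( byte ^ 1 )))" \
        | dd of="$file" bs=1 seek="$offset" count=1 conv=notrunc 2>/dev/null
}
# usage as in the transcripts below:
# bitflip 100000 SRC/debian-10.6.0-amd64-DVD-2.iso
```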

    Dar
    devuan:/mnt/memdisk# dar -c full -z6 -R SRC --delta sig -q
    devuan:/mnt/memdisk# ./bitflip 100000 SRC/debian-10.6.0-amd64-DVD-2.iso
    devuan:/mnt/memdisk# dar -c diff -A full -z6 -R SRC -q
    devuan:/mnt/memdisk# ls -l *.dar
    -rw-r--r-- 1 root root        643 Nov  1 19:45 diff.1.dar
    -rw-r--r-- 1 root root 4704429776 Nov  1 19:05 full.1.dar
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# dar -x full -R DST -q
    devuan:/mnt/memdisk# dar -x diff -R DST -q
    devuan:/mnt/memdisk# diff -s SRC/debian-10.6.0-amd64-DVD-2.iso DST/debian-10.6.0-amd64-DVD-2.iso
    Files SRC/debian-10.6.0-amd64-DVD-2.iso and DST/debian-10.6.0-amd64-DVD-2.iso are identical
    devuan:/mnt/memdisk#

    For dar, the backups used:

    • 4.3 GiB for the full backup (compression ratio of 0.21%)
    • 614 bytes for the differential backup (compression ratio of 99.99998%)
    Rsync
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# rsync -arHAX --info=stats SRC/* DST
    sent 4,688,066,109 bytes  received 35 bytes  284,125,220.85 bytes/sec
    total size is 4,686,921,728  speedup is 1.00
    devuan:/mnt/memdisk# ./bitflip 100000 SRC/debian-10.6.0-amd64-DVD-2.iso
    devuan:/mnt/memdisk# rsync -arHAX --info=stats SRC/* DST
    sent 4,688,066,109 bytes  received 35 bytes  302,455,880.26 bytes/sec
    total size is 4,686,921,728  speedup is 1.00
    devuan:/mnt/memdisk# rsync -arHAXt --info=stats --no-whole-file SRC/* DST
    sent 342,469 bytes  received 547,803 bytes  30,178.71 bytes/sec
    total size is 4,686,921,728  speedup is 5,264.60
    devuan:/mnt/memdisk# diff -s SRC/debian-10.6.0-amd64-DVD-2.iso DST/debian-10.6.0-amd64-DVD-2.iso
    Files SRC/debian-10.6.0-amd64-DVD-2.iso and DST/debian-10.6.0-amd64-DVD-2.iso are identical
    devuan:/mnt/memdisk#

    We had to use --no-whole-file to see the binary delta in action with rsync. This feature is not activated when copying to a local disk, as it does not make sense there (for rsync): the computation time needed for the binary delta exceeds that of a byte-to-byte copy, and rsync does not store just the delta (no backup history) but modifies the existing backup in place. Anyway, binary delta is supported (of course!) by rsync.

    Tar
    devuan:/mnt/memdisk# tar --listed-incremental=snapshot.file -czf full.tar.gz SRC
    devuan:/mnt/memdisk# ./bitflip 100000 SRC/debian-10.6.0-amd64-DVD-2.iso
    devuan:/mnt/memdisk# tar --listed-incremental=snapshot.file -czf diff.tar.gz SRC
    devuan:/mnt/memdisk# ls -l
    total 9133304
    drwxr-xr-x 2 root root         40 Oct 31 17:31 SRC
    -rwxr--r-- 1 root root        460 Oct 31 16:34 bitflip
    -rw-r--r-- 1 root root 4676243904 Oct 31 17:28 diff.tar.gz
    -rw-r--r-- 1 root root 4676244172 Oct 31 17:24 full.tar.gz
    -rw-r--r-- 1 root root        107 Oct 31 17:28 snapshot.file
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# tar --listed-incremental=/dev/null -xf ../full.tar.gz
    devuan:/mnt/memdisk/DST# tar --listed-incremental=/dev/null -xf ../diff.tar.gz
    devuan:/mnt/memdisk/DST# diff -s ../SRC/debian-10.6.0-amd64-DVD-2.iso SRC/debian-10.6.0-amd64-DVD-2.iso
    Files ../SRC/debian-10.6.0-amd64-DVD-2.iso and SRC/debian-10.6.0-amd64-DVD-2.iso are identical
    devuan:/mnt/memdisk/DST#

    For tar the backup used:

    • 4.3 GiB for the full backup (compression ratio of 0.22%)
    • 4.3 GiB for the differential backup (compression ratio of 0.22%)

    Binary delta is not supported by tar.

    Detection of suspicious modifications

    For this test we will use the hide_change script, which relies on the bitflip script seen above and tries to hide the modifications performed, as a virus, keylogger or rootkit would tend to do. We will make a full backup before the modification and a differential backup after it, then observe each tool's behavior.

    Here follows the script in action: we see no change with ls -l, while stat shows exactly the same information for:

    • file size
    • block used
    • Inode number
    • permissions
    • user and group ownership
    • last access time
    • last modification time
    The only change concerns the inode change time (ctime), which cannot be set manually and signals that some inode properties (but no file content) have changed. This condition occurs when changing the file's permissions, ownership, extended attributes and so on, but should not occur when only the file's data has changed.

    terre:/mnt/memdisk# mkdir SRC terre:/mnt/memdisk# echo "Hello World!" > SRC/file.txt terre:/mnt/memdisk# cat SRC/file.txt Hello World! terre:/mnt/memdisk# ls -l SRC/file.txt -rw-r--r-- 1 root root 13 Nov 12 13:13 SRC/file.txt terre:/mnt/memdisk# stat SRC/file.txt File: SRC/file.txt Size: 13 Blocks: 8 IO Block: 4096 regular file Device: 1bh/27d Inode: 424690 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2020-11-12 13:13:19.021978762 +0100 Modify: 2020-11-12 13:13:09.213998852 +0100 Change: 2020-11-12 13:13:09.213998852 +0100 Birth: - terre:/mnt/memdisk# ./hide_change SRC/file.txt terre:/mnt/memdisk# ls -l SRC/file.txt -rw-r--r-- 1 root root 13 Nov 12 13:13 SRC/file.txt terre:/mnt/memdisk# stat SRC/file.txt File: SRC/file.txt Size: 13 Blocks: 8 IO Block: 4096 regular file Device: 1bh/27d Inode: 424690 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2020-11-12 13:13:19.021978762 +0100 Modify: 2020-11-12 13:13:09.213998852 +0100 Change: 2020-11-12 13:13:39.549936636 +0100 Birth: - terre:/mnt/memdisk# cat SRC/file.txt Lello World! terre:/mnt/memdisk#
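    The hide_change script itself is not reproduced here. Its core trick can be sketched as follows (a hypothetical reconstruction: save the timestamps, alter the data, then restore the timestamps; the real script relies on bitflip, while this sketch simply overwrites the first byte):

```shell
#!/bin/sh
# hide_change FILE: alter FILE's data while restoring its atime/mtime,
# so that only the inode change time (ctime) betrays the modification.
# Hypothetical reconstruction of the script used in these tests.
hide_change() {
    f="$1"
    ref=$(mktemp)
    touch -r "$f" "$ref"    # save the original atime/mtime on a scratch file
    # change the data (the real script flips a bit with bitflip; here we
    # simply overwrite the first byte with 'L')
    printf 'L' | dd of="$f" bs=1 count=1 conv=notrunc 2>/dev/null
    touch -r "$ref" "$f"    # put atime/mtime back; ctime is updated anyway
    rm -f "$ref"
}
```

    The final touch resets atime and mtime to their original values, but the kernel updates ctime on that very operation, which is exactly the trace dar looks for.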
    Dar
    terre:/mnt/memdisk# mkdir SRC terre:/mnt/memdisk# echo "Hello World!" > SRC/file.txt terre:/mnt/memdisk# dar -c full -R SRC -N -------------------------------------------- 1 inode(s) saved including 0 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 0 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 1 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# ./hide_change SRC/file.txt terre:/mnt/memdisk# dar -c diff -A full -R SRC -N -q SECURITY WARNING! SUSPICIOUS FILE /mnt/memdisk/SRC/file.txt: ctime changed since archive of reference was done, while no other inode information changed terre:/mnt/memdisk#

    dar issues a warning because of this suspicious condition. Note that we still have the sane file in the full backup; in case of doubt, we can compare it with the modified version:

    terre:/mnt/memdisk# dar -d full -R SRC -q DIFF /mnt/memdisk/SRC/file.txt: different file data, offset of first difference is: 0 Some file comparisons failed terre:/mnt/memdisk#

    The previous test reports that the first byte to have changed is at offset 0, so it is not just a metadata change that led to this warning. If necessary, we can restore the sane data from the full backup.
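    Independently of dar, the same kind of check can be done with cmp against a sane copy of the file (for instance one restored from the full backup); cmp reports the position of the first differing byte, counted from 1 where dar counts from 0. The file contents below are stand-ins for the real data:

```shell
# Cross-check a suspicious file against a sane copy with cmp.
# Sample data only; in practice ref.txt would be restored from the backup.
d=$(mktemp -d)
printf 'Hello World!\n' > "$d/ref.txt"    # sane copy from the full backup
printf 'Lello World!\n' > "$d/file.txt"   # current, suspicious version
cmp "$d/ref.txt" "$d/file.txt" || true    # cmp exits nonzero when files differ
rm -rf "$d"
```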

    Rsync
    terre:/mnt/memdisk# rm -rf SRC terre:/mnt/memdisk# mkdir SRC terre:/mnt/memdisk# echo "Hello World!" > SRC/file.txt terre:/mnt/memdisk# rsync -arvHAX SRC DST sending incremental file list created directory DST SRC/ SRC/file.txt sent 146 bytes received 65 bytes 422.00 bytes/sec total size is 13 speedup is 0.06 terre:/mnt/memdisk# ./hide_change SRC/file.txt terre:/mnt/memdisk# rsync -arvHAX SRC DST sending incremental file list sent 83 bytes received 13 bytes 192.00 bytes/sec total size is 13 speedup is 0.14 terre:/mnt/memdisk# cat SRC/file.txt Lello World! terre:/mnt/memdisk# cat DST/SRC/file.txt Hello World! terre:/mnt/memdisk#

    rsync did not report the problem but, fortunately, it did not synchronize the change either, so we end up with a sane version in the DST backup directory. However, as the user is not aware of this potential risk, the virus/ransomware can spread silently.

    Tar
    terre:/mnt/memdisk# rm -rf SRC terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# mkdir SRC terre:/mnt/memdisk# echo "Hello World!" > SRC/file.txt terre:/mnt/memdisk# tar --listed-incremental=snapshot.file -cf full.tar SRC terre:/mnt/memdisk# ./hide_change SRC/file.txt terre:/mnt/memdisk# tar --listed-incremental=snapshot.file -cvf diff.tar SRC SRC/ SRC/file.txt terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# cd DST terre:/mnt/memdisk/DST# tar -xf ../full.tar terre:/mnt/memdisk/DST# cat SRC/file.txt Hello World! terre:/mnt/memdisk/DST# tar -xf ../diff.tar terre:/mnt/memdisk/DST# cat SRC/file.txt Lello World! terre:/mnt/memdisk/DST#

    As seen above, tar does not detect any problem, yet the file has been resaved as a whole (even though its last modification time was unchanged), which corrupts the new backup with potentially harmful data. The good point is that you still have the full backup with the sane data. But at the next backup cycle, as you were not notified of the risk, you will lose it and keep only the corrupted version of this file.

    Snapshot

    Dar
    terre:/mnt/memdisk# dar -c full -z6 -R /usr --on-fly-isolate snapshot -------------------------------------------- 267245 inode(s) saved including 23 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 0 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 267245 -------------------------------------------- EA saved for 5 inode(s) FSA saved for 237962 inode(s) -------------------------------------------- Now performing on-fly isolation... terre:/mnt/memdisk# ls -l *.dar -rw-r--r-- 1 root root 4006060941 Nov 12 15:34 full.1.dar -rw-r--r-- 1 root root 6662595 Nov 12 15:34 snapshot.1.dar terre:/mnt/memdisk# dar -C recreated_snapshot -A full -z6 -q terre:/mnt/memdisk# ls -al *.dar -rw-r--r-- 1 root root 4006060941 Nov 12 15:34 full.1.dar -rw-r--r-- 1 root root 7907094 Nov 12 16:33 recreated_snapshot.1.dar -rw-r--r-- 1 root root 6662595 Nov 12 15:34 snapshot.1.dar terre:/mnt/memdisk# dar -c diff -A snapshot -R /usr -z -------------------------------------------- 23 inode(s) saved including 23 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 267222 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 0 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 267245 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 0 inode(s) -------------------------------------------- 
terre:/mnt/memdisk# terre:/mnt/memdisk# ls -lh *.dar -rw-r--r-- 1 root root 25M Nov 12 16:37 diff.1.dar -rw-r--r-- 1 root root 3.8G Nov 12 15:34 full.1.dar -rw-r--r-- 1 root root 7.6M Nov 12 16:33 recreated_snapshot.1.dar -rw-r--r-- 1 root root 6.4M Nov 12 15:34 snapshot.1.dar terre:/mnt/memdisk# dar -c diff2 -A recreated_snapshot -R /usr -z -------------------------------------------- 23 inode(s) saved including 23 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 267222 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 0 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 267245 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# dar -c snapshot_alone -A + -R /usr -z -------------------------------------------- 23 inode(s) saved including 23 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 267222 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 0 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 267245 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# touch /usr/local/src terre:/mnt/memdisk# dar -c faked_diff -A snapshot -R /usr --dry-run -q -vt Adding folder to archive: /usr/local/src Saving Filesystem Specific Attributes 
for /usr/local/src terre:/mnt/memdisk# ls -l *.dar -rw-r--r-- 1 root root 25537139 Nov 12 16:37 diff.1.dar -rw-r--r-- 1 root root 25537139 Nov 12 16:39 diff2.1.dar -rw-r--r-- 1 root root 4006060941 Nov 12 15:34 full.1.dar -rw-r--r-- 1 root root 7907094 Nov 12 16:33 recreated_snapshot.1.dar -rw-r--r-- 1 root root 6662595 Nov 12 15:34 snapshot.1.dar -rw-r--r-- 1 root root 25537142 Nov 12 16:44 snapshot_alone.1.dar terre:/mnt/memdisk#

    As seen above, a snapshot can be created:

    • as part of a backup process (full, differential, incremental or even decremental backup) (--on-fly-isolate)
    • from an existing backup (-C)
    • alone, by a dedicated operation (-A +)

    root@terre:/mnt/memdisk# ls -l full.1.dar -rw-r--r-- 1 root root 3895581703 Nov 29 21:53 full.1.dar root@terre:/mnt/memdisk# dar -l full -q FATAL error, aborting operation: Cannot open catalogue: unknown compression root@terre:/mnt/memdisk# !bitflip bitflip 31124653000 full.1.dar root@terre:/mnt/memdisk# dar -l full -q Archive version format : 11 Compression algorithm used : gzip Compression block size used : 0 Symmetric key encryption used : none Asymmetric key encryption used : none Archive is signed : no Sequential reading marks : present User comment : N/A Catalogue size in archive : 7799028 bytes Archive is composed of 1 file(s) File size: 3895581703 bytes The global data compression ratio is: 51% CATALOGUE CONTENTS : total number of inode : 263480 fully saved : 263480 binay delta patch : 0 inode metadata only : 0 distribution of inode(s) - directories : 18080 - plain files : 216142 - symbolic links : 29258 - named pipes : 0 - unix sockets : 0 - character devices : 0 - block devices : 0 - Door entries : 0 hard links information - number of inode with hard link : 11 - number of reference to hard linked inodes: 34 destroyed entries information 0 file(s) have been record as destroyed since backup of reference root@terre:/mnt/memdisk# bitflip 31124653000 full.1.dar root@terre:/mnt/memdisk# dar -t full Final memory cleanup... 
FATAL error, aborting operation: Cannot open catalogue: unknown compression root@terre:/mnt/memdisk# dar -t full -A snapshot -------------------------------------------- 263503 item(s) treated 0 item(s) with error 0 item(s) ignored (excluded by filters) -------------------------------------------- Total number of items considered: 263503 -------------------------------------------- root@terre:/mnt/memdisk# root@terre:/mnt/memdisk# dar -t full --sequential-read A problem occurred while reading this archive contents: Cannot open catalogue: unknown compression -------------------------------------------- 263503 item(s) treated 0 item(s) with error 0 item(s) ignored (excluded by filters) -------------------------------------------- Total number of items considered: 263503 -------------------------------------------- root@terre:/mnt/memdisk#

    Once created, a snapshot can be used:

    • to create a differential or incremental backup
    • to list the files that would be saved (thus those that have changed, were added or have been removed), i.e. all changes since the time the snapshot was made (using the --dry-run option)
    • to rescue a corrupted backup when the corruption falls within the backup's table of contents, located at the end of the dar backup (only if the snapshot was created from the backup to rescue, either using the -C option, possibly long after the backup was made, or using --on-fly-isolate at the time the backup was created)
    Note that, as shown above, a corruption of the table of contents (aka "catalogue") can also be partially recovered from using the --sequential-read mode; it will just not let dar remove files that were deleted since the reference backup was made (which thus does not concern full backups, as here).

    Rsync

    This feature is not supported by rsync.

    Tar

    tar can generate a snapshot:

    • alone, by redirecting the backup output to /dev/null
    • as part of a backup process (in fact tar cannot do otherwise)

    terre:/mnt/memdisk# tar --listed-incremental=snapshot.file -czf full.tar.gz /usr tar: Removing leading `/' from member names tar: Removing leading `/' from hard link targets terre:/mnt/memdisk# ls -l snapshot.file -rw-r--r-- 1 root root 6288644 Nov 12 15:12 snapshot.file terre:/mnt/memdisk# cp snapshot.file snapshot.file.ref tar --listed-incremental=snapshot.file -cvf /dev/null /usr /usr/ /usr/bin/ /usr/games/ /usr/include/ /usr/include/X11/ /usr/include/X11/bitmaps/ /usr/include/arpa/ /usr/include/asm-generic/ /usr/include/attr/ /usr/include/c++/ /usr/include/c++/ [...] /usr/share/zoneinfo/right/Canada/ /usr/share/zoneinfo/right/Chile/ /usr/share/zoneinfo/right/Etc/ /usr/share/zoneinfo/right/Europe/ /usr/share/zoneinfo/right/Indian/ /usr/share/zoneinfo/right/Mexico/ /usr/share/zoneinfo/right/Pacific/ /usr/share/zoneinfo/right/SystemV/ /usr/share/zoneinfo/right/US/ /usr/share/zsh/ /usr/share/zsh/site-functions/ /usr/share/zsh/vendor-completions/ /usr/src/ terre:/mnt/memdisk# ls -l sna snapshot.file snapshot.file.ref terre:/mnt/memdisk# ls -l snapshot.file* -rw-r--r-- 1 root root 6288644 Nov 12 15:20 snapshot.file -rw-r--r-- 1 root root 6288644 Nov 12 15:18 snapshot.file.ref terre:/mnt/memdisk#

    While a snapshot can be used (and is in fact required) to make a differential backup, it cannot really be used to see the differences a current, living filesystem has against a given snapshot. Worse, doing so modifies the snapshot, so you first have to make a copy in order not to break your backup process. Worse still, if an incremental backup fails and you have not kept a copy of the snapshot, the snapshot having been modified, you will usually have to redo the whole backup process from the full backup to be sure not to miss some modified files. The same applies if you lose the snapshot file by mistake.
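    The copy-before-use precaution described above can be automated; here is a hypothetical sketch (the function name and file names are examples, not part of tar):

```shell
#!/bin/sh
# Run a tar incremental backup against a COPY of the snapshot file, so that
# a failed run leaves the snapshot (and thus the backup chain) untouched.
# Sketch only; names are examples.
safe_incremental() {
    snap="$1" out="$2" dir="$3"
    cp "$snap" "$snap.work"
    if tar --listed-incremental="$snap.work" -czf "$out" "$dir" 2>/dev/null; then
        mv "$snap.work" "$snap"       # success: commit the updated snapshot
    else
        rm -f "$snap.work" "$out"     # failure: roll back, keep the old snapshot
        return 1
    fi
}
```

    With such a wrapper, a failed incremental run leaves the snapshot file exactly as it was, so the backup chain remains usable.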

    On-fly hashing

    Dar
    terre:/mnt/memdisk# dar -c backup -R /usr -g usr/bin -z6 --hash sha1 -------------------------------------------- 0 inode(s) saved including 0 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 8 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 8 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# ls -l *.dar* -rw-r--r-- 1 root root 171 Nov 12 17:22 backup.1.dar -rw-r--r-- 1 root root 55 Nov 12 17:22 backup.1.dar.sha1 terre:/mnt/memdisk# sha1sum -c backup.1.dar.sha1 backup.1.dar: OK terre:/mnt/memdisk#
    Rsync

    Not supported by rsync.

    Tar

    Not supported by tar.

    Custom command during operation

    As an example (though much more can be done), we take the case of an automounted directory. Such a volume is mounted only when used; when unused, no mount point directory shows up and, unless you know it exists, no backup of its content is performed. The idea is to trigger the mounts when the backup process enters the parent directory, so that the backup includes them.

    Dar
    terre:/mnt/memdisk# cat /etc/auto.mnt Espace -defaults,relatime,acl,bg,rsize=8192,wsize=8192 nfs.systeme-solaire.espace:/mnt/Externe/Espace Commun -defaults,relatime,acl,bg,rsize=8192,wsize=8192,ro nfs.systeme-solaire.espace:/mnt/Externe/Commun Backup -defaults,relatime,acl,bg,rsize=8192,wsize=8192,ro nfs.systeme-solaire.espace:/mnt/Backup terre:/mnt/memdisk# ls -l /mnt/Externe/ total 4 drwxr-xr-x 7 root root 4096 Jul 14 17:58 Espace terre:/mnt/memdisk# dar -c backup -R / -g /mnt -q terre:/mnt/memdisk# dar -l backup [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | Size | Date | filename --------------------------------+------------+-------+-------+---------+-------------------------------+------------ [Saved][-] [-L-][ ][ ] drwxr-xr-x 0 0 0 Wed Oct 21 18:17:07 2020 mnt [Saved][-] [-L-][ ][ ] drwxr-xr-x 1000 1002 0 Mon Nov 9 11:56:54 2020 mnt/localdisk [Saved][-] [---][-----][ ] lrwxrwxrwx 0 0 0 Thu Aug 15 23:29:46 2019 mnt/Backup [Saved][-] [---][ ][ ] drwxr-xr-x 0 0 0 Thu Nov 12 17:42:11 2020 mnt/Externe [Saved][-][Saved][---][ ][ ] drwxr-xr-x 0 0 0 Tue Jul 14 17:58:57 2020 mnt/Externe/Espace terre:/mnt/memdisk# terre:/mnt/memdisk# rm backup.1.dar terre:/mnt/memdisk# dar -c backup -R / -g mnt -q '-<' mnt '-=' 'file %p/Externe/Backup %p/Externe/Commun' /mnt/Externe/Backup: directory /mnt/Externe/Commun: directory terre:/mnt/memdisk# dar -l backup [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | Size | Date | filename --------------------------------+------------+-------+-------+---------+-------------------------------+------------ [Saved][-] [-L-][ ][ ] drwxr-xr-x 0 0 0 Wed Oct 21 18:17:07 2020 mnt [Saved][-] [-L-][ ][ ] drwxr-xr-x 1000 1002 0 Mon Nov 9 11:56:54 2020 mnt/localdisk [Saved][-] [---][-----][ ] lrwxrwxrwx 0 0 0 Thu Aug 15 23:29:46 2019 mnt/Backup [Saved][-] [---][ ][ ] drwxr-xr-x 0 0 0 Thu Nov 12 18:01:41 2020 mnt/Externe [Saved][-][Saved][---][ ][ ] drwxr-x--- 993 1002 0 Wed Nov 11 10:21:55 2015 mnt/Externe/Commun 
[Saved][-][Saved][---][ ][ ] drwxr-xr-x 0 0 0 Sun Sep 13 12:22:24 2020 mnt/Externe/Backup [Saved][-][Saved][---][ ][ ] drwxr-xr-x 0 0 0 Tue Jul 14 17:58:57 2020 mnt/Externe/Espace terre:/mnt/memdisk# ls -l /mnt/Externe/ total 12 drwxr-xr-x 9 root root 4096 Sep 13 12:22 Backup drwxr-x--- 4 commun maison 4096 Nov 11 2015 Commun drwxr-xr-x 7 root root 4096 Jul 14 17:58 Espace terre:/mnt/memdisk#

    In the previous example we see that the /mnt/Externe directory is a mount point containing three auto-mounted volumes: Espace, Commun and Backup. At first only Espace was mounted; performing a backup without care would thus skip the two other directories.

    Then, thanks to the -< and -= options, we instructed dar to run the file command on the two missing directories when entering /mnt, and as a result we now see both of them in the backup. We could have triggered the mounts before starting the backup, but as the backup may include many other directories, the delay between such a preliminary step and the time the backup finally reaches the automount point at /mnt/Externe may exceed the automount timeout, letting the volumes be unmounted and disappear before the backup process reaches them.

    terre:/mnt/memdisk# dar -c backup -R / -g usr/bin --hash sha512 -s 100M -q terre:/mnt/memdisk# ls -l backup.* -rw-r--r-- 1 root root 104857600 Nov 12 18:30 backup.1.dar -rw-r--r-- 1 root root 143 Nov 12 18:30 backup.1.dar.sha512 -rw-r--r-- 1 root root 104857600 Nov 12 18:30 backup.2.dar -rw-r--r-- 1 root root 143 Nov 12 18:30 backup.2.dar.sha512 -rw-r--r-- 1 root root 104857600 Nov 12 18:30 backup.3.dar -rw-r--r-- 1 root root 143 Nov 12 18:30 backup.3.dar.sha512 -rw-r--r-- 1 root root 63577207 Nov 12 18:30 backup.4.dar -rw-r--r-- 1 root root 143 Nov 12 18:30 backup.4.dar.sha512 terre:/mnt/memdisk# dar -t backup -E 'sha512sum -c %p/%b.%N.%e.sha512' backup.4.dar: OK backup.1.dar: OK backup.2.dar: OK backup.3.dar: OK backup.4.dar: OK -------------------------------------------- 2594 item(s) treated 0 item(s) with error 0 item(s) ignored (excluded by filters) -------------------------------------------- Total number of items considered: 2594 -------------------------------------------- terre:/mnt/memdisk#

    In this example we used slicing together with on-fly hashing, which generated a corresponding sha512 hash file for each slice. We then tested the archive content and, at the same time, the hash files thanks to the -E option. Of course any user command, shell script or python script can be used instead, for backup, restoration, testing, snapshotting, and so on.
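    The same slice verification can also be done outside of dar with a plain shell loop over the hash files (a sketch assuming the basename.N.dar.sha512 naming scheme shown above; the function name is ours):

```shell
#!/bin/sh
# Verify every slice of a dar backup against its on-fly hash file.
# Sketch; assumes hash files named <basename>.<N>.dar.sha512 next to slices.
verify_slices() {
    base="$1"
    for h in "$base".*.dar.sha512; do
        sha512sum -c "$h" || return 1    # stop at the first corrupted slice
    done
}
```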

    Rsync

    Not supported by rsync.

    Tar

    tar has the -F option to launch a command after each tape, but it is only available with multi-volume tar archives, which in turn cannot be used with compression. We won't test it, as it is quite restrictive and does not match any common use case.

    Dry-run execution

    Dar
    terre:/mnt/memdisk/A# ls -l total 0 terre:/mnt/memdisk/A# dar -c backup -R / -g usr/bin --dry-run -------------------------------------------- 2594 inode(s) saved including 5 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 34 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 2628 -------------------------------------------- EA saved for 3 inode(s) FSA saved for 2154 inode(s) -------------------------------------------- terre:/mnt/memdisk/A# ls -l total 0 terre:/mnt/memdisk/A#
    Rsync
    terre:/mnt/memdisk# rsync -arHAX --dry-run /usr/bin DST terre:/mnt/memdisk# ls -l DST ls: cannot access 'DST': No such file or directory terre:/mnt/memdisk#
    Tar

    Dry-run execution does not seem to be supported by tar.

    User message within backup

    Dar
    terre:/mnt/memdisk# dar -c backup --user-comment "passphrase is the usual one. Archive was made on %d on host %h" -R / -g usr/bin -K camellia: -zxz -s 100M Archive backup requires a password: Please confirm your password: -------------------------------------------- 2594 inode(s) saved including 5 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 34 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 2628 -------------------------------------------- EA saved for 3 inode(s) FSA saved for 2154 inode(s) -------------------------------------------- terre:/mnt/memdisk# dar -l backup -aheader Archive version format : 11 Compression algorithm used : xz Compression block size used : 0 Symmetric key encryption used : camellia 256 Asymmetric key encryption used : none Archive is signed : no Sequential reading marks : present User comment : passphrase is the usual one. Archive was made on Thu Nov 12 18:57:35 2020 on host terre KDF iteration count : 10000 KDF hash algorithm : argon2 Salt size : 32 bytes Final memory cleanup... FATAL error, aborting operation: header only mode asked terre:/mnt/memdisk#

    The -aheader option lets one see the archive header, which always remains in clear text. The usual listing operation provides additional information from the ciphered table of contents and thus, in that context, requires the passphrase:

    terre:/mnt/memdisk# dar -l backup -q Archive backup requires a password: Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted Archive version format : 11 Compression algorithm used : xz Compression block size used : 0 Symmetric key encryption used : camellia 256 Asymmetric key encryption used : none Archive is signed : no Sequential reading marks : present User comment : passphrase is the usual one. Archive was made on Thu Nov 12 18:57:35 2020 on host terre KDF iteration count : 10000 KDF hash algorithm : argon2 Salt size : 32 bytes Catalogue size in archive : 78268 bytes Archive is composed of 2 file(s) File size : 104857600 bytes Last file size : 17168696 bytes Archive total size is : 122026296 bytes The global data compression ratio is: 72% CATALOGUE CONTENTS : total number of inode : 2589 fully saved : 2589 binay delta patch : 0 inode metadata only : 0 distribution of inode(s) - directories : 2 - plain files : 2152 - symbolic links : 435 - named pipes : 0 - unix sockets : 0 - character devices : 0 - block devices : 0 - Door entries : 0 hard links information - number of inode with hard link : 5 - number of reference to hard linked inodes: 10 destroyed entries information 0 file(s) have been record as destroyed since backup of reference terre:/mnt/memdisk#
    Rsync

    Not supported by rsync.

    Tar

    Not supported by tar.

    Backup sanity test

    Dar
    terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -zlz4 -------------------------------------------- 2594 inode(s) saved including 5 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 34 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 2628 -------------------------------------------- EA saved for 3 inode(s) FSA saved for 2154 inode(s) -------------------------------------------- terre:/mnt/memdisk# dar -t backup -------------------------------------------- 2594 item(s) treated 0 item(s) with error 0 item(s) ignored (excluded by filters) -------------------------------------------- Total number of items considered: 2594 -------------------------------------------- terre:/mnt/memdisk#
    Rsync

    It does not seem possible to have rsync check that the backup (destination) directory is sane and usable. Every operation modifies the destination files, or saves modified files either in the destination directory (the backup) or in an alternate directory (--compare-dest option).

    Tar
    terre:/mnt/memdisk# rm -rf backup.tar.gz terre:/mnt/memdisk# tar -czf backup.tar.gz /usr/bin tar: Removing leading `/' from member names tar: Removing leading `/' from hard link targets terre:/mnt/memdisk# tar -tzf backup.tar.gz usr/bin/ usr/bin/bitmap usr/bin/dot usr/bin/indi_usbdewpoint usr/bin/ruby2.5 usr/bin/pod2man usr/bin/iptables-xml usr/bin/knotify4 usr/bin/fakeroot usr/bin/xclock [...] /bin/traceproto usr/bin/ofm2opl usr/bin/akonadi_archivemail_agent usr/bin/resizecons usr/bin/rletopnm usr/bin/dh_install usr/bin/updvitomp usr/bin/h2xs usr/bin/xmessage terre:/mnt/memdisk# echo $? 0 terre:/mnt/memdisk#
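    The sanity test above relies on tar's exit status; wrapped in a helper (the function name is ours), it could look like:

```shell
#!/bin/sh
# check_targz ARCHIVE: listing the archive decompresses the whole gzip
# stream and walks the tar structure, so a nonzero exit status flags a
# damaged backup.
check_targz() {
    tar -tzf "$1" > /dev/null 2>&1
}
```

    For example: check_targz backup.tar.gz || echo "backup is damaged".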

    Comparing with original data

    Dar
    terre:/mnt/memdisk# dar -c backup -R SRC -q terre:/mnt/memdisk# dar -d backup -R SRC -------------------------------------------- 2594 item(s) treated 0 item(s) do not match those on filesystem 0 item(s) ignored (excluded by filters) -------------------------------------------- Total number of items considered: 2594 -------------------------------------------- terre:/mnt/memdisk# echo $? 0 terre:/mnt/memdisk#
    Rsync

    This does not seem to be supported by rsync.

    Tar
    terre:/mnt/memdisk/SRC# tar -czf ../backup.tar.gz . terre:/mnt/memdisk/SRC# tar -dzf ../backup.tar.gz terre:/mnt/memdisk/SRC# echo $? 0 terre:/mnt/memdisk/SRC#

    Tunable verbosity

    Dar
    terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -q terre:/mnt/memdisk# rm backup.1.dar terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -------------------------------------------- 2594 inode(s) saved including 5 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 34 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 2628 -------------------------------------------- EA saved for 3 inode(s) FSA saved for 2154 inode(s) -------------------------------------------- terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -vm Arguments read from /usr/local/etc/darrc : Creating low layer: Writing archive into a plain file object... Adding a new layer on top: Caching layer for better performances... Writing down the archive header... Adding a new layer on top: Escape layer to allow sequential reading... All layers have been created successfully Building the catalog object... Processing files for backup... Writing down archive contents... Closing the escape layer... Writing down the first archive terminator... Writing down archive trailer... Writing down the second archive terminator... Closing archive low layer... Archive is closed. 
-------------------------------------------- 2594 inode(s) saved including 5 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 34 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 2628 -------------------------------------------- EA saved for 3 inode(s) FSA saved for 2154 inode(s) -------------------------------------------- Making room in memory (releasing memory used by archive of reference)... Final memory cleanup... terre:/mnt/memdisk# rm -f backup* terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -vt -q Adding folder to archive: /usr Saving Filesystem Specific Attributes for /usr Adding folder to archive: /usr/bin Saving Filesystem Specific Attributes for /usr/bin Adding file to archive: /usr/bin/bitmap Saving Filesystem Specific Attributes for /usr/bin/bitmap [...] Saving Filesystem Specific Attributes for /usr/bin/dh_install Adding symlink to archive: /usr/bin/updvitomp Adding file to archive: /usr/bin/h2xs Saving Filesystem Specific Attributes for /usr/bin/h2xs Adding file to archive: /usr/bin/xmessage Saving Filesystem Specific Attributes for /usr/bin/xmessage terre:/mnt/memdisk# rm -f backup* terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -vd Inspecting directory /root Inspecting directory /bin Inspecting directory /sbin Inspecting directory /tmp Inspecting directory /sys Inspecting directory /lib [...] 
Inspecting directory /var Inspecting directory /proc Inspecting directory /dev Inspecting directory /etc Inspecting directory /media Inspecting directory /run terre:/mnt/memdisk# terre:/mnt/memdisk# rm -f backup.* terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -vf -q Finished Inspecting directory /usr/bin , saved 408 Mio, compression ratio 13% Finished Inspecting directory /usr , saved 408 Mio, compression ratio 13% terre:/mnt/memdisk# terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -vmasks -q directory tree filter: AND | OR | | Is subdir of: /usr/bin [case sensitive] | +-- +-- filename filter: AND | TRUE +-- EA filter: AND | TRUE +-- Compression filter: TRUE terre:/mnt/memdisk#

    dar has several options to control which types of messages are shown: -v, -vs, -vt, -vd, -vf, -vm, -vmasks and -q. They can be combined.

    Rsync
    terre:/mnt/memdisk# rsync -arHAX /usr/bin DST terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# rsync -arHAX -v /usr/bin DST sending incremental file list created directory DST bin/ bin/2to3-2.7 bin/411toppm bin/7z bin/7za bin/7zr bin/FvwmCommand [...] bin/zstdmt -> zstd bin/perl => bin/perl5.28.1 bin/perlbug => bin/perlthanks bin/python3.7 => bin/python3.7m bin/pkg-config => bin/x86_64-pc-linux-gnu-pkg-config bin/unzip => bin/zipinfo sent 437,298,617 bytes received 42,381 bytes 174,936,399.20 bytes/sec total size is 445,394,557 speedup is 1.02 root@terre:/mnt/memdisk# rsync -arHAX --info=progress2 /usr/bin DST 437,083,500 98% 128.42MB/s 0:00:03 (xfr#2152, to-chk=0/2593) root@terre:/mnt/memdisk#

    The -v option produces more verbose output, while -q removes the non-error messages. Using both at the same time does not seem to differ from using -q alone. However, rsync has a very rich set of additional options, like --info and --debug, that can be added on top.

    Tar
    terre:/mnt/memdisk# tar -czf backup.tar.gz /usr/bin tar: Removing leading `/' from member names tar: Removing leading `/' from hard link targets terre:/mnt/memdisk# rm backup.tar.gz terre:/mnt/memdisk# tar -v -czf backup.tar.gz /usr/bin tar: Removing leading `/' from member names /usr/bin/ /usr/bin/bitmap /usr/bin/dot /usr/bin/indi_usbdewpoint /usr/bin/ruby2.5 /usr/bin/pod2man /usr/bin/iptables-xml /usr/bin/knotify4 [...] /usr/bin/traceproto /usr/bin/ofm2opl /usr/bin/akonadi_archivemail_agent /usr/bin/resizecons /usr/bin/rletopnm /usr/bin/dh_install /usr/bin/updvitomp /usr/bin/h2xs /usr/bin/xmessage terre:/mnt/memdisk#

    tar only provides the -v option to increase verbosity.

    Modify Backup content

    We will perform two types of tests:

    • remove one or several files from an existing backup without having to run a whole new backup
    • add some forgotten files to an existing backup without performing a new full backup

    Dar
    terre:/mnt/memdisk# dar -c backup -R / -g usr/bin -z6 -q terre:/mnt/memdisk# dar -l backup -g usr/bin/emacs-gtk [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | Size | Date | filename --------------------------------+------------+-------+-------+---------+-------------------------------+------------ [Saved][-] [-L-][ 64%][ ] drwxr-xr-x root root 408 Mio Sun Jun 2 23:25:09 2019 usr [Saved][-] [-L-][ 64%][ ] drwxr-xr-x root root 408 Mio Sun Nov 8 13:43:58 2020 usr/bin [Saved][ ] [-L-][ 90%][X] -rwxr-xr-x root root 38 Mio Thu Sep 5 04:35:24 2019 usr/bin/emacs-gtk terre:/mnt/memdisk# dar -A backup -+ without-emacs -ak -P usr/bin/emacs-gtk -vs -q Skipping file: <ROOT>/usr/bin/emacs-gtk terre:/mnt/memdisk# dar -l without-emacs -g usr/bin/emacs-gtk [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | Size | Date | filename --------------------------------+------------+-------+-------+---------+-------------------------------+------------ [Saved][-] [-L-][ 62%][ ] drwxr-xr-x root root 370 Mio Sun Jun 2 23:25:09 2019 usr [Saved][-] [-L-][ 62%][ ] drwxr-xr-x root root 370 Mio Sun Nov 8 13:43:58 2020 usr/bin terre:/mnt/memdisk#rm backup.* terre:/mnt/memdisk#mv without-emacs.1.dar backup.1.dar terre:/mnt/memdisk#

    dar does not modify an existing backup but creates a copy of it with the requested files or directories removed. The process can be quick even with compression, thanks to the -ak option, which avoids uncompressing and recompressing the files that are kept. Before removing the old backup, you can test the sanity of the newly generated one.

    terre:/mnt/memdisk# dar -c emacs -R / -g usr/bin/emacs-gtk -z6 -q terre:/mnt/memdisk# dar -l emacs [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | Size | Date | filename --------------------------------+------------+-------+-------+---------+-------------------------------+------------ [Saved][-] [-L-][ 90%][ ] drwxr-xr-x root root 38 Mio Sun Jun 2 23:25:09 2019 usr [Saved][-] [-L-][ 90%][ ] drwxr-xr-x root root 38 Mio Sun Nov 8 13:43:58 2020 usr/bin [Saved][ ] [-L-][ 90%][X] -rwxr-xr-x root root 38 Mio Thu Sep 5 04:35:24 2019 usr/bin/emacs-gtk terre:/mnt/memdisk# dar -A backup -@ emacs -+ with-emacs -ak -------------------------------------------- 2594 inode(s) added to archive with 10 hard link(s) recorded 0 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted -------------------------------------------- EA saved for 3 inode(s) FSA saved for 2159 inode(s) -------------------------------------------- Total number of inode(s) considered: 2594 -------------------------------------------- terre:/mnt/memdisk# dar -l with-emacs -g usr/bin/emacs-gtk [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | Size | Date | filename --------------------------------+------------+-------+-------+---------+-------------------------------+------------ [Saved][-] [-L-][ 64%][ ] drwxr-xr-x root root 408 Mio Sun Jun 2 23:25:09 2019 usr [Saved][-] [-L-][ 64%][ ] drwxr-xr-x root root 408 Mio Sun Nov 8 13:43:58 2020 usr/bin [Saved][ ] [-L-][ 90%][X] -rwxr-xr-x root root 38 Mio Thu Sep 5 04:35:24 2019 usr/bin/emacs-gtk terre:/mnt/memdisk# rm emacs.* backup.* terre:/mnt/memdisk# mv with-emacs.1.dar backup.1.dar terre:/mnt/memdisk#

    Here, to add files to an existing backup, we must make a small backup of these files only, then merge it with the backup we want to modify. None of the source data is touched in this operation: if something goes wrong or if you made an error, you can fix it and restart without any risk of losing data.

    Rsync

    The backup made by rsync is just a copy of the saved files: removing a file from the backup is as simple as calling rm on that file in the directory tree that constitutes the backup.

    Adding a new file to the backup can be done by running rsync as usual on the directory tree where that file resides.

    Tar
    terre:/mnt/memdisk# tar -czf backup.tar.gz /usr/bin tar: Removing leading `/' from member names tar: Removing leading `/' from hard link targets terre:/mnt/memdisk# tar -tvf backup.tar.gz | grep emacs-gtk -rwxr-xr-x root/root 39926024 2019-09-05 04:35 usr/bin/emacs-gtk terre:/mnt/memdisk# tar -tvf backup.tar.gz | grep emacs-gtk -rwxr-xr-x root/root 39926024 2019-09-05 04:35 usr/bin/emacs-gtk terre:/mnt/memdisk# tar --delete usr/bin/emacs-gtk -f backup.tar.gz tar: Cannot update compressed archives tar: Error is not recoverable: exiting now terre:/mnt/memdisk#

    Well, tar cannot modify compressed archives. But what is the point of removing a file from a backup if storage space is not an issue? And if space were an issue, would compression not be used?
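    A workaround does exist, at the cost of rewriting the whole archive twice: decompress, delete the member with GNU tar's --delete (which only works on uncompressed archives), then recompress. A sketch on a small hypothetical tree:

```shell
# Assumes GNU tar; the SRC tree and file names are illustrative only.
mkdir -p SRC/usr/bin
echo "keep"   > SRC/usr/bin/xmessage
echo "remove" > SRC/usr/bin/emacs-gtk
tar -czf backup.tar.gz -C SRC usr/bin        # the compressed backup
gzip -d backup.tar.gz                        # rewrite 1: -> backup.tar
tar --delete usr/bin/emacs-gtk -f backup.tar # remove the member in place
gzip backup.tar                              # rewrite 2: -> backup.tar.gz
```

    Both rewrites touch the full archive, which is exactly the cost dar's -ak merging avoids.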

    stdin/stdout backup read/write

    Dar
    terre:/mnt/memdisk# dar -c - -z6 -R SRC > backup.file -------------------------------------------- 2594 inode(s) saved including 5 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 0 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 2594 -------------------------------------------- EA saved for 3 inode(s) FSA saved for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# dar -x - --sequential-read -R DST < backup.file -------------------------------------------- 2594 inode(s) restored including 5 hard link(s) 0 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 0 inode(s) ignored (excluded by filters) 0 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 2594 -------------------------------------------- EA restored for 3 inode(s) FSA restored for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk#

    dar can read a backup from stdin and write a backup to stdout.

    Rsync

    Using stdin/stdout to write or read backed-up data does not seem possible with rsync.

    Tar
    terre:/mnt/memdisk# tar -czf - SRC > backup.file terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# cd DST terre:/mnt/memdisk/DST# tar -xzf - < ../backup.file terre:/mnt/memdisk/DST#

    tar can read a backup from stdin and write a backup to stdout.
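    This is what allows piping a backup straight into another process without ever materializing the archive on disk; a minimal local sketch (the same idea works across ssh to a remote host):

```shell
mkdir -p SRC DST
echo "hello" > SRC/file.txt
# create on stdout, restore from stdin, no intermediate archive file
tar -czf - SRC | (cd DST && tar -xzf -)
```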

    Remote network storage

    For remote network storage, if you use a personal NAS you may skip encrypting the backup, though you should still transfer it over a secure protocol if the underlying network is not your own from end to end (for example, if part of the path goes over the Internet without IPsec or equivalent).

    Why encrypt the backup if a secure transfer protocol is used?

    • Secure transfer prevents anyone from reading your data in transit, in particular your credentials to the remote site, but once transferred the data is stored in clear at the other end.
    • An encrypted backup prevents the owner of the remote storage from accessing your data without your consent, though it does not prevent someone from intercepting your transfer and reading the credentials you used to connect to the remote storage. That person could then connect later and delete all your backups... which is surely not what you want.

    In the following we will use both: a secure protocol and an encrypted backup, without using any local storage. We will also use compression to save precious space (you usually pay for the cloud storage you use) and maybe slicing, depending on the constraints imposed by the remote storage (some providers charge extra to store larger files, so slicing avoids that extra cost). Another use case for slicing is when the file transfer protocol cannot resume an interrupted transfer: you then only need to restart the transfer of the last slice, not of the whole backup.

    terre:/mnt/memdisk# sftp denis@dar The authenticity of host 'dar (192.168.6.32)' can't be established. RSA key fingerprint is SHA256:KN3o/psWC512grcZ5/J5dTSg9PzIXbZAHiig/hqfkc8. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'dar,192.168.6.32' (RSA) to the list of known hosts. denis@dar's password: Connected to denis@dar. sftp> bye terre:/mnt/memdisk#
    Dar
    terre:/mnt/memdisk# dar -c sftp://denis@dar/home/denis/backup -R / -g etc -K aes: -zlz4 -s 1M Please provide the password for login denis at host dar: Archive backup requires a password: Please confirm your password: -------------------------------------------- 2360 inode(s) saved including 0 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 27 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 2387 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 1523 inode(s) -------------------------------------------- terre:/mnt/memdisk# sftp denis@dar denis@dar's password: Connected to denis@dar. sftp> ls -l -rw-r--r-- 1 denis denis 1048576 Nov 26 17:15 backup.1.dar -rw-r--r-- 1 denis denis 1048576 Nov 26 17:15 backup.2.dar -rw-r--r-- 1 denis denis 1048576 Nov 26 17:15 backup.3.dar -rw-r--r-- 1 denis denis 474982 Nov 26 17:15 backup.4.dar sftp> bye terre:/mnt/memdisk#

    The backup results in four encrypted slices located on the remote sftp server. Let's add a -E option to see which slices are being read while testing the archive.
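    With -s 1M dar cuts the archive every mebibyte, so the slice count is simply the ceiling of the archive size divided by the slice size, with the last slice holding the remainder. A quick check against the slice sizes listed by sftp above:

```shell
slice_size=1048576                          # -s 1M
archive_size=$((3 * 1048576 + 474982))      # sum of the four slice sizes above
slices=$(( (archive_size + slice_size - 1) / slice_size ))   # ceiling division
last=$(( archive_size - (slices - 1) * slice_size ))         # remainder slice
echo "$slices slices, last one $last bytes"
```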

    terre:/mnt/memdisk# dar -t sftp://denis@dar/home/denis/backup -E "echo 'openning slice %p/%b.%N.%e'" Please provide the password for login denis at host dar: openning slice /home/denis/backup.4.dar Archive backup requires a password: Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted openning slice /home/denis/backup.1.dar openning slice /home/denis/backup.2.dar openning slice /home/denis/backup.3.dar openning slice /home/denis/backup.4.dar -------------------------------------------- 2360 item(s) treated 0 item(s) with error 0 item(s) ignored (excluded by filters) -------------------------------------------- Total number of items considered: 2360 -------------------------------------------- terre:/mnt/memdisk#

    We see that all slices have been read as expected. Now let's restore /etc/fstab into the current directory and compare the restored file with the real /etc/fstab:

    terre:/mnt/memdisk# dar -x sftp://denis@dar/home/denis/backup -E "echo 'openning slice %p/%b.%N.%e'" -g etc/fstab --flat Please provide the password for login denis at host dar: openning slice /home/denis/backup.4.dar Archive backup requires a password: Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted 1 inode(s) restored including 0 hard link(s) 0 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 269 inode(s) ignored (excluded by filters) 0 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 270 -------------------------------------------- EA restored for 0 inode(s) FSA restored for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# diff fstab /etc/fstab terre:/mnt/memdisk# echo $? 0 terre:/mnt/memdisk#

    As seen above, only one slice (slice #4) was necessary to restore /etc/fstab. Now let's save two files, a huge one and a small one, into a single-slice backup and measure the transfer time of backing up and restoring this encrypted and compressed backup over sftp. We have set up public key authentication for precise time measurement:

    terre:/mnt/memdisk# sftp denis@dar Connected to denis@dar. sftp> bye terre:/mnt/memdisk# ls -l SRC total 315396 -rw------- 1 root root 322961408 Nov 26 17:28 devuan_beowulf_3.0.0_amd64-netinstall.iso -rw-r--r-- 1 root root 994 Nov 26 17:29 fstab terre:/mnt/memdisk# terre:/mnt/memdisk# time dar -c sftp://denis@dar/home/denis/backup -R SRC -z6 -K aes:hello -afile-auth -q 20.769u 2.445s 0:22.77 101.8% 0+0k 0+0io 0pf+0w terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# time dar -x sftp://denis@dar/home/denis/backup -R DST -K hello -afile-auth -q Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted 11.826u 4.211s 0:15.88 100.9% 0+0k 0+0io 0pf+0w terre:/mnt/memdisk# diff -rs SRC DST Files SRC/devuan_beowulf_3.0.0_amd64-netinstall.iso and DST/devuan_beowulf_3.0.0_amd64-netinstall.iso are identical Files SRC/fstab and DST/fstab are identical terre:/mnt/memdisk# rm DST/fstab terre:/mnt/memdisk# time dar -x sftp://denis@dar/home/denis/backup -R DST -K hello -afile-auth -q -g fstab Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted 0.680u 0.012s 0:00.87 79.3% 0+0k 0+0io 0pf+0w terre:/mnt/memdisk#

    While restoring the whole backup needs 15 seconds of transfer time, restoring fstab alone requires only 0.87 seconds. As there is only one slice, this shows that dar reads only the necessary parts of the archive, even within a slice, to perform the operation.

    Rsync
    terre:/mnt/memdisk# rsync -arHAXSzq /etc denis@dar:/home/denis terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# rsync -arHAXSzq denis@dar:/home/denis/etc/fstab . terre:/mnt/memdisk# diff fstab /etc/fstab terre:/mnt/memdisk# echo $? 0 terre:/mnt/memdisk#

    We can back up to a remote sftp server and compress the data on the fly, but the backup is stored neither compressed nor encrypted. Restoration is possible per file or for the whole backup.

    Tar

    tar cannot by itself use sftp or any other secure transfer protocol, nor encrypt its output. When the time comes to restore a particular file, the whole backup has to be retrieved, decrypted and uncompressed, even to restore just a single file.
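    Encryption can still be bolted on externally by piping the tar stream through a cipher tool. A sketch using the openssl command (an assumption: any equivalent tool would do, and the hard-coded passphrase is for illustration only):

```shell
mkdir -p SRC && echo "secret data" > SRC/file.txt
# backup: compress, then encrypt the stream
tar -czf - SRC | openssl enc -aes-256-cbc -pbkdf2 -pass pass:hello -out backup.tar.gz.enc
# restore: decrypt, then extract
mkdir -p DST
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:hello -in backup.tar.gz.enc | tar -xzf - -C DST
```

    Even then, restoring a single file still means downloading and decrypting the whole archive.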

    Backup Robustness

    The objective of this test is to measure how the backup tools under test behave when the backup has been corrupted. We will simply flip a single bit of the backup, at the beginning, in the middle or at the end, and observe the consequences in terms of the ability to restore the backup.
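    The hide_change helper used in these tests is not listed in this document; a minimal stand-in that flips a single bit at a given offset, built only on dd, od and POSIX shell arithmetic, might look like this (the actual tool may differ):

```shell
# flip_bit FILE BIT_OFFSET: flip a single bit of FILE in place
flip_bit() {
    file=$1 bit=$2
    byte=$(( bit / 8 ))                 # byte containing the targeted bit
    mask=$(( 1 << (bit % 8) ))          # position of the bit within that byte
    # read the current byte value as a decimal number
    old=$(dd if="$file" bs=1 skip="$byte" count=1 2>/dev/null | od -An -tu1 | tr -d ' ')
    # XOR flips exactly the targeted bit, then the byte is written back in place
    printf "$(printf '\\%03o' $(( old ^ mask )))" \
        | dd of="$file" bs=1 seek="$byte" count=1 conv=notrunc 2>/dev/null
}
```

    Calling it twice on the same offset restores the original content, which is handy for repeating the tests below.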

    Dar
    terre:/mnt/memdisk# dar -c backup -R SRC -z6 -------------------------------------------- 74725 inode(s) saved including 0 hard link(s) treated 0 inode(s) changed at the moment of the backup and could not be saved properly 0 byte(s) have been wasted in the archive to resave changing files 0 inode(s) with only metadata changed 0 inode(s) not saved (no inode/file change) 0 inode(s) failed to be saved (filesystem error) 0 inode(s) ignored (excluded by filters) 0 inode(s) recorded as deleted from reference backup -------------------------------------------- Total number of inode(s) considered: 74725 -------------------------------------------- EA saved for 0 inode(s) FSA saved for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# ls -al backup* -rw-r--r-- 1 root root 219088536 Nov 17 17:45 backup.1.dar terre:/mnt/memdisk# ./hide_change backup.1.dar 1 terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# dar -x backup -R DST backup.1.dar is not a valid file (wrong magic number), please provide the good file. [return = YES | Esc = NO] Escaping... Final memory cleanup... Aborting program. User refused to continue while asking: backup.1.dar is not a valid file (wrong magic number), please provide the good file. terre:/mnt/memdisk# dar -x backup -R DST -alax LAX MODE: In spite of its name, backup.1.dar does not appear to be a dar slice, assuming a data corruption took place and continuing LAX MODE: Archive is flagged as having escape sequence (which is normal in recent archive versions). However if this is not expected, shall I assume a data corruption occurred in this field and that this flag should be ignored? (If unsure, refuse) [return = YES | Esc = NO] Escaping... 
-------------------------------------------- 74725 inode(s) restored including 0 hard link(s) 0 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 0 inode(s) ignored (excluded by filters) 0 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 74725 -------------------------------------------- EA restored for 0 inode(s) FSA restored for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# diff -r SRC DST terre:/mnt/memdisk# echo $? 0 terre:/mnt/memdisk#

    After flipping the first bit, dar detected the corruption. We can use the lax mode (-alax option) to bypass it, and the restoration then proceeds normally. Let's try a bit further, for example somewhere in the middle of the archive, at offset 876354144 (half of the size expressed in bits, not bytes):
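    Both this offset and the last-bit offset used further below derive from the archive size reported by ls (219088536 bytes); a quick check of the arithmetic:

```shell
size_bytes=219088536               # size of backup.1.dar shown by ls
bits=$(( size_bytes * 8 ))         # total number of bits in the archive
mid=$(( bits / 2 ))                # middle of the archive, in bits
last=$(( bits - 1 ))               # offset of the very last bit
echo "$mid $last"                  # 876354144 1752708287
```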

    terre:/mnt/memdisk# ls -l backup* -rw-r--r-- 1 root root 219088536 Nov 17 17:45 backup.1.dar terre:/mnt/memdisk# hide_change backup.1.dar 876354144 terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# dar -x backup -R DST Error while restoring /mnt/memdisk/DST/linux-5.9.8/drivers/mtd/spi-nor/core.c : compressed data CRC error -------------------------------------------- 74724 inode(s) restored including 0 hard link(s) 0 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 0 inode(s) ignored (excluded by filters) 1 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 74725 -------------------------------------------- EA restored for 0 inode(s) FSA restored for 0 inode(s) -------------------------------------------- Final memory cleanup... All files asked could not be restored terre:/mnt/memdisk# diff -rq SRC DST Files SRC/linux-5.9.8/drivers/mtd/spi-nor/core.c and DST/linux-5.9.8/drivers/mtd/spi-nor/core.c differ terre:/mnt/memdisk#

    One file could not be restored properly, as reported by dar, but all the other files could be and are identical to their respective originals. Let's modify the last bit for completeness:

    terre:/mnt/memdisk# ls -al *.dar -rw-r--r-- 1 root root 219088537 Nov 17 17:45 backup.1.dar terre:/mnt/memdisk# cp backup.1.dar backop.1.dar terre:/mnt/memdisk# hide_change backup.1.dar 1752708287 terre:/mnt/memdisk# ls -al *.dar -rw-r--r-- 1 root root 219088536 Nov 17 17:58 backop.1.dar -rw-r--r-- 1 root root 219088536 Nov 17 17:58 backup.1.dar terre:/mnt/memdisk# diff backup.1.dar backop.1.dar Binary files backup.1.dar and backop.1.dar differ terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# dar -x backup -R DST -------------------------------------------- 74725 inode(s) restored including 0 hard link(s) 0 inode(s) not restored (not saved in archive) 0 inode(s) not restored (overwriting policy decision) 0 inode(s) ignored (excluded by filters) 0 inode(s) failed to restore (filesystem error) 0 inode(s) deleted -------------------------------------------- Total number of inode(s) considered: 74725 -------------------------------------------- EA restored for 0 inode(s) FSA restored for 0 inode(s) -------------------------------------------- terre:/mnt/memdisk# terre:/mnt/memdisk# diff -rq SRC DST terre:/mnt/memdisk# echo $? 0 terre:/mnt/memdisk#

    By chance, this did not affect the ability to restore the backup. However, if it had, we have several fallbacks: the --sequential-read mode, the use of an already created snapshot (aka isolated catalogue) backing up the internal table of contents, as seen with the snapshot feature, or as a last resort the -alax option, possibly combined with the --sequential-read mode and a backup snapshot.

    Rsync
    terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# rsync -arHAXS SRC/* DST terre:/mnt/memdisk# ls -al DST total 0 drwxr-xr-x 3 root root 60 Nov 17 18:07 . drwxrwxrwt 4 root root 140 Nov 17 18:07 .. drwxrwxr-x 24 root root 740 Nov 10 21:16 linux-5.9.8 terre:/mnt/memdisk# diff -rq SRC DST terre:/mnt/memdisk# ./hide_change DST/linux-5.9.8/README 10 terre:/mnt/memdisk# diff -rq SRC DST Files SRC/linux-5.9.8/README and DST/linux-5.9.8/README differ terre:/mnt/memdisk# rsync -arvHAXS SRC/* DST sending incremental file list sent 1,254,054 bytes received 5,215 bytes 839,512.67 bytes/sec total size is 954,980,692 speedup is 758.36 terre:/mnt/memdisk# diff -rq SRC DST Files SRC/linux-5.9.8/README and DST/linux-5.9.8/README differ terre:/mnt/memdisk#

    After modifying the backup (the directory we sync to), rsync does not report any difference and the backup stays corrupted.

    Tar
    terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# tar -czf backup.tar.gz SRC terre:/mnt/memdisk# ls -l backup* -rw-r--r-- 1 root root 183659664 Nov 17 18:11 backup.tar.gz terre:/mnt/memdisk# ./hide_change backup.tar.gz 1 terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# cd DST terre:/mnt/memdisk/DST# tar -xzf ../backup.tar.gz gzip: stdin: not in gzip format tar: Child returned status 1 tar: Error is not recoverable: exiting now terre:/mnt/memdisk/DST# terre:/mnt/memdisk/DST# find . -ls 1720964 0 drwxr-xr-x 2 root root 40 Nov 17 18:56 . terre:/mnt/memdisk/DST#

    Flipping the first bit leads to a completely unusable backup: nothing got restored at all. Let's see what is going on when we modify a single bit in the middle of the backup:

    terre:/mnt/memdisk# ls -l backup* -rw-r--r-- 1 root root 183659664 Nov 17 18:11 backup.tar.gz terre:/mnt/memdisk# ./hide_change backup.tar.gz 734638656 terre:/mnt/memdisk# cd DST terre:/mnt/memdisk/DST# tar -xf ../backup.tar.gz tar: Skipping to next header gzip: stdin: invalid compressed data--crc error gzip: stdin: invalid compressed data--length error tar: Child returned status 1 tar: Error is not recoverable: exiting now terre:/mnt/memdisk/DST# terre:/mnt/memdisk/DST# diff -rq ../SRC SRC | wc -l diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/arc: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/arm: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/arm64: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/c6x: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/h8300: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/microblaze: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/mips: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/nios2: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/openrisc: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/powerpc: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/sh: No such file or directory diff: SRC/linux-5.9.8/scripts/dtc/include-prefixes/xtensa: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/copyloops/copy_mc_64.S: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/copyloops/copyuser_64.S: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/copyloops/copyuser_power7.S: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/copyloops/memcpy_64.S: No such file or directory diff: 
SRC/linux-5.9.8/tools/testing/selftests/powerpc/copyloops/memcpy_power7.S: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/nx-gzip/include/vas-api.h: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/primitives/asm/asm-compat.h: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/primitives/asm/asm-const.h: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/primitives/asm/feature-fixups.h: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/primitives/asm/ppc_asm.h: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/primitives/word-at-a-time.h: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/stringloops/memcmp_32.S: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/stringloops/memcmp_64.S: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/stringloops/strlen_32.S: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/vphn/asm/lppaca.h: No such file or directory diff: SRC/linux-5.9.8/tools/testing/selftests/powerpc/vphn/vphn.c: No such file or directory 150 terre:/mnt/memdisk/DST# find ../SRC | wc -l 74726 terre:/mnt/memdisk/DST# find SRC | wc -l 32615 terre:/mnt/memdisk/DST#

    Only 32615 of the 74726 files that were saved could be restored. Assuming the problem comes from the backup being compressed, let's try tar without compression:

    terre:/mnt/memdisk# tar -cf backup.tar SRC terre:/mnt/memdisk# ls -l backup* -rw-r--r-- 1 root root 1011312640 Nov 17 19:28 backup.tar terre:/mnt/memdisk# ./hide_change backup.tar 1 terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# cd DST terre:/mnt/memdisk/DST# tar -xf ../backup.tar tar: This does not look like a tar archive tar: Skipping to next header tar: Exiting with failure status due to previous errors terre:/mnt/memdisk/DST# diff -rq ../SRC SRC terre:/mnt/memdisk/DST# echo $? 0 terre:/mnt/memdisk/DST#

    Without compression, a tar backup is much more reliable; however, we now need more than 5 times the storage space to hold the backup. Let's see what happens when we modify a single bit in the middle of the backup:
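    As an aside, the "more than 5 times" figure comes straight from the two ls listings (backup.tar just above, backup.tar.gz from the compressed test):

```shell
uncompressed=1011312640     # backup.tar, from the ls output above
compressed=183659664        # backup.tar.gz, from the compressed test
ratio_x10=$(( uncompressed / (compressed / 10) ))   # ratio in tenths
echo "$ratio_x10"           # 55, i.e. roughly 5.5 times larger
```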

    terre:/mnt/memdisk# ./hide_change backup.tar 4045250560 terre:/mnt/memdisk# rm -rf DST terre:/mnt/memdisk# mkdir DST terre:/mnt/memdisk# cd DST terre:/mnt/memdisk/DST# tar -xf ../backup.tar terre:/mnt/memdisk/DST# terre:/mnt/memdisk/DST# diff -rq ../SRC SRC Files ../SRC/linux-5.9.8/drivers/media/pci/bt8xx/bttv-cards.c and SRC/linux-5.9.8/drivers/media/pci/bt8xx/bttv-cards.c differ terre:/mnt/memdisk/DST#

    The backup restoration succeeded according to tar, but the corruption has been completely ignored! The result is both a corrupted backup and corrupted restored data, with no notification at all...

    Parchive

    We can increase the robustness of any file or set of files by means of the Parchive software. While its use is suited to tar and dar, it is not suited to rsync, due to the directory tree structure rsync uses for its backup. We will thus measure the par2create (Parchive) execution time compared to the backup time of tar and dar.

    devuan:/mnt/memdisk# mkdir SRC devuan:/mnt/memdisk# cp --preserve -r /usr SRC devuan:/mnt/memdisk# time tar -czf backup.tar.gz SRC 62.550u 3.148s 1:01.75 106.3% 0+0k 0+0io 0pf+0w devuan:/mnt/memdisk# time dar -c backup -z6 -1 0 -at -R SRC -q 60.287u 1.152s 1:01.45 99.9% 0+0k 0+0io 0pf+0w devuan:/mnt/memdisk# ls -l backup.* -rw-r--r-- 1 root root 601976990 Dec 1 10:48 backup.1.dar -rw-r--r-- 1 root root 588260243 Dec 1 10:47 backup.tar.gz devuan:/mnt/memdisk# time par2create -r5 -n1 -q backup.tar.gz Opening: backup.tar.gz Done 94.465u 0.535s 0:05.74 1654.8% 0+0k 0+0io 0pf+0w devuan:/mnt/memdisk# time par2create -r5 -n1 -q backup.1.dar Opening: backup.1.dar Done 110.048u 0.364s 0:06.19 1783.5% 0+0k 0+0io 0pf+0w devuan:/mnt/memdisk#

    We see that the cost of the redundancy process is not negligible: for 5% of data redundancy, we need around 10% of extra execution time. Things get worse when the size of the data Parchive has to process increases and disk I/O comes into play (here /mnt/memdisk was an in-memory tmpfs filesystem). This is the case when the amount of data to back up is larger than the available RAM, which is quite a frequent situation:
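    The "around 10%" figure can be checked against the wall-clock times reported above (dar backup 1:01.45, par2create 0:06.19, converted to hundredths of a second):

```shell
backup_cs=6145     # 1:01.45 for the dar backup
par2_cs=619        # 0:05.74 to 0:06.19 for par2create; worst case used here
overhead_pct=$(( par2_cs * 100 / backup_cs ))
echo "${overhead_pct}% extra wall-clock time"
```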

    devuan:~/tmp# du -sh SRC 27G SRC devuan:~/tmp# free -m . total used free shared buff/cache available Mem: 15776 560 433 1 14782 14823 Swap: 0 0 0 devuan:~/tmp#

    We now place ourselves in this context: 27 GiB of data to back up on a machine with only around 16 GiB of RAM.

    devuan:~/tmp# time dar -c backup -z6 -at -1 0 -R SRC -q -E 'par2create -r5 -n1 -q %b.%N.%e' Opening: backup.1.dar Done 15237.777u 105.379s 36:48.53 694.7% 0+0k 158854544+57572120io 158pf+0w devuan:~/tmp#

    The execution time of around 36 minutes above can be improved by using multiple slices. Choosing a slice size smaller than the available RAM lets Parchive compute the parity data right after each slice has been generated, while the slice is still in the disk cache (RAM), bypassing the disk I/O that previously took place in a second pass:

    devuan:~/tmp# time dar -c backup_splitted -z6 -at -1 0 -R SRC -q -E 'par2create -r5 -n1 -q %b.%N.%e' -s 1G
    Opening: backup_splitted.1.dar
    Done
    Opening: backup_splitted.2.dar
    Done
    [...]
    Opening: backup_splitted.26.dar
    Done
    Opening: backup_splitted.27.dar
    Done
    6040.106u 49.201s 21:58.73 461.7%  0+0k 61862640+57567104io 123pf+0w
    devuan:~/tmp#

    The total execution time dropped to around 22 minutes, a 40% time reduction! Incidentally, having 27 files of 1 GiB each is also easier to manipulate (file transfer, copy to removable media, ...) than a single huge file of 27 GiB.
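    This per-slice redundancy pays off at recovery time. As a sketch — assuming the par2 volumes created by the -E hook above sit next to each slice, with names following the %b.%N.%e scheme — a damaged slice can be verified and repaired with the standard par2 commands:

```shell
# Hypothetical recovery session: slice 3 of the backup got corrupted.
par2 verify backup_splitted.3.dar.par2   # detects the corruption
par2 repair backup_splitted.3.dar.par2   # rebuilds the slice from parity data
dar -t backup_splitted                   # then let dar test the whole backup
```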

    devuan:~/tmp# time tar -czf backup.tar.gz SRC
    837.662u 80.776s 18:26.08 83.0%  0+0k 54926128+54799696io 6pf+0w
    devuan:~/tmp# time par2create -r5 -n1 -q backup.tar.gz
    Opening: backup.tar.gz
    Done
    13352.393u 71.390s 18:24.34 1215.5%  0+0k 95000064+2772144io 9pf+0w
    devuan:~/tmp#

    We get the same execution time (18 minutes) for both tar and par2, thus a total of 36 minutes. This identical time for both programs, while the real CPU usage is much higher for par2, clearly shows that the slowest operation was disk I/O. The overall time of the operation is similar to what we saw with dar above, except that we cannot use multi-volume backups to speed up the operation as we did with dar: tar is not able to compress *and* produce a multi-volume backup, so what we would gain on one side would be lost on the other...
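    A common workaround — not equivalent to dar's slicing, since the compression state spans the whole stream — is to pipe tar through gzip and split the result; the piece names below are illustrative:

```shell
# Sketch: produce 1 GiB pieces from a compressed tar stream. Unlike dar
# slices, no piece is usable alone: all pieces must be concatenated back
# before extraction, so a lost piece still ruins the whole backup.
tar -cf - SRC | gzip -6 | split -b 1G - backup.tar.gz.part.
# restore side: glue the pieces back together and untar
cat backup.tar.gz.part.* | tar -xzf -
```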

    Execution Performance

    In order to compare performance in a fair manner, we have to take into consideration that some CPU-intensive features are not implemented by all programs or have different default values:

    • The default compression level differs: dar uses level 9 by default while rsync and tar use level 6, which is faster. Only dar and rsync seem to let the user set this value, so we will have to manually set dar to use level 6 too: -z6 option
    • disk usage optimization is not supported by tar, so we will not activate sparse file detection and optimization (no -S option for rsync) and will disable it for dar, where it is activated by default: -1 0 option
    • dar spends non-negligible CPU cycles duplicating metadata along the backup; this is one of the roots of the exclusive robustness dar backups provide, but it is an optional feature. We will disable it using the -at option
    • rsync and dar compute a checksum on the saved data, while tar completely skips this data protection. Unfortunately it is not possible to disable this for either of them, so tar will have a speed advantage thanks to this difference.
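    Putting the points above together, the normalized invocations used in the following tests look roughly like this (a sketch; SRC and DST are placeholders, and the dar options are those discussed above):

```shell
# dar: gzip level 6, sparse-file handling off, no metadata redundancy
dar -c backup -z6 -1 0 -at -R SRC -q
# tar: gzip at its default level 6
tar -czf backup.tar.gz SRC
# rsync: archive mode, hard links, level-6 stream compression
rsync -arHz SRC DST
```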

    For data reduction, we will first compare with the same feature set, then activate all the specific features each program can leverage to improve data reduction, and measure the execution time impact.

    Not all performance aspects are interesting in all use cases. We can distinguish two main types of use cases:

    • A first set of tests will cover the copy operations, they will not use any compression, focusing only on execution time.
    • A second set of tests will cover the backup operations, use compression and focus on the data reduction mainly but also on the execution time.

    When using rsync as a backup tool (as opposed to a copy operation), we assume the remote (or local) copy is the backup; restoring thus implies syncing this remote (or local) copy back to the place where the original data was located and has been lost.

    To prepare the data under test we used:

    • a big ISO file
    • a full linux system

    All data to back up or to copy has been stored in a tmpfs, which is also the destination of the created backups and restored data. Swap has been disabled to avoid any disk I/O penalty, with the intention of providing a fair comparison environment (avoiding variable disk cache performance).
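    For reference, such an environment can be reproduced roughly as follows (the size matches the one used here; requires root):

```shell
# mount a 14 GiB RAM-backed filesystem and disable swap, so neither the
# source data nor the backups ever touch a physical disk
mkdir -p /mnt/memdisk
mount -t tmpfs -o size=14g tmpfs /mnt/memdisk
swapoff -a
```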

    devuan:/mnt/memdisk# free
                  total        used        free      shared  buff/cache   available
    Mem:       16155172      691764      151608     8780152    15311800     6299316
    Swap:             0           0           0
    devuan:/mnt/memdisk# df -h .
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs            14G  4.0K   14G   1% /mnt/memdisk
    devuan:/mnt/memdisk#

    To prepare the Linux system under backup, we installed a Devuan system in a VM a few days beforehand. On day D, a full backup was executed, we updated/upgraded the system using the distro package manager, and we made a differential backup based on the first one, both backups being written to the testing machine (a bare-metal server) used for the performance tests:

    root@Georges:~# dar -c sftp://denis@10.13.30.163/home/denis/tmp/full -zlz4 -R / -M -D --hash sha1 -afile-auth -C cat_full
    root@Georges:~# apt-get update
    [...]
    root@Georges:~# apt-get upgrade
    [...]
    root@Georges:~# dar -c sftp://denis@10.13.30.163/home/denis/tmp/diff -A cat_full -zlz4 -R / -M -D --hash sha1 -afile-auth
    root@Georges:~#

    Back on the testing host (the bare-metal server at 10.13.30.163) we restored the data for the performance tests the following way, excluding FSA and EA to avoid a ton of warnings, as those are not supported on a tmpfs filesystem:

    devuan:/mnt/memdisk# mkdir state-1
    devuan:/mnt/memdisk# mkdir state-2
    devuan:/mnt/memdisk# dar -x ~denis/tmp/full -R state-1 --fsa-scope none -u "*"
     --------------------------------------------
     136836 inode(s) restored
        including 27 hard link(s)
     0 inode(s) not restored (not saved in archive)
     0 inode(s) not restored (overwriting policy decision)
     0 inode(s) ignored (excluded by filters)
     0 inode(s) failed to restore (filesystem error)
     0 inode(s) deleted
     --------------------------------------------
     Total number of inode(s) considered: 136836
     --------------------------------------------
     EA restored for 3 inode(s)
     FSA restored for 0 inode(s)
     --------------------------------------------
    devuan:/mnt/memdisk# dar -x ~denis/tmp/full -R state-2 --fsa-scope none -u "*"
     --------------------------------------------
     136836 inode(s) restored
        including 27 hard link(s)
     0 inode(s) not restored (not saved in archive)
     0 inode(s) not restored (overwriting policy decision)
     0 inode(s) ignored (excluded by filters)
     0 inode(s) failed to restore (filesystem error)
     0 inode(s) deleted
     --------------------------------------------
     Total number of inode(s) considered: 136836
     --------------------------------------------
     EA restored for 3 inode(s)
     FSA restored for 0 inode(s)
     --------------------------------------------
    devuan:/mnt/memdisk# dar -x ~denis/tmp/diff -R state-2 --fsa-scope none -u "*" -w
     --------------------------------------------
     568 inode(s) restored
        including 0 hard link(s)
     136670 inode(s) not restored (not saved in archive)
     0 inode(s) not restored (overwriting policy decision)
     0 inode(s) ignored (excluded by filters)
     0 inode(s) failed to restore (filesystem error)
     0 inode(s) deleted
     --------------------------------------------
     Total number of inode(s) considered: 137238
     --------------------------------------------
     EA restored for 0 inode(s)
     FSA restored for 0 inode(s)
     --------------------------------------------
    devuan:/mnt/memdisk#

    This leaves us with two directories, state-1 and state-2, corresponding to the state of the Devuan machine as of two days ago and today, respectively.

    Performance of copy operations

    To perform the copy operation, we decomposed the operations to precisely measure the execution time. We could have decided to pipe the backup to a second instance of the backup tool restoring the data (only tar and dar would benefit from this), but the time measurement was less easy to obtain and doing it that way does not seem to provide any noticeable speed improvement. The data used here in SRC1 is a single big ISO file (a Devuan installation DVD image).
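    For completeness, the piped variant we decided against would look like this with dar (sequential-read mode is required on the reading side; this mirrors the pipe used by the build_test_tree script later in this document):

```shell
# create the backup on stdout and restore it immediately from stdin,
# without any intermediate archive file
dar -c - -1 0 -at -R SRC1 -q | dar -x - --sequential-read -R DST -q
```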

    Dar
    devuan:/mnt/memdisk# time dar -c copy -1 0 -at -R SRC1 -q
    1.834u 2.909s 0:04.87 97.1%  0+0k 744+0io 12pf+0w
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time dar -x copy -R DST -q
    1.563u 2.836s 0:04.41 99.5%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# du -sh SRC1 DST
    4.4G    SRC1
    4.4G    DST
    devuan:/mnt/memdisk# diff -r SRC1 DST
    devuan:/mnt/memdisk# echo $?
    0
    devuan:/mnt/memdisk# rm -rf DST copy.1.dar
    devuan:/mnt/memdisk# time dar -c copy -1 0 -at -R SRC2 -q
    4.739u 3.683s 0:08.44 99.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time dar -x copy -R DST -q
    4.449u 3.872s 0:08.34 99.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# du -sh SRC2 DST
    4.1G    SRC2
    4.1G    DST
    devuan:/mnt/memdisk# find DST | wc -l
    136837
    devuan:/mnt/memdisk#

    The overall copy time for dar is:

    • 9.28 seconds for the big file in SRC1
    • 16.78 seconds for the full linux system in SRC2
    Rsync
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time rsync -arH SRC1 DST
    23.088u 5.420s 0:15.28 186.5%  0+0k 168+0io 4pf+0w
    devuan:/mnt/memdisk# diff -r SRC1/ DST/SRC1/
    devuan:/mnt/memdisk# echo $?
    0
    devuan:/mnt/memdisk# rm -rf DST
    devuan:/mnt/memdisk# time rsync -arH SRC2 DST
    22.408u 8.560s 0:16.59 186.6%  0+0k 1224+0io 6pf+0w
    devuan:/mnt/memdisk# du -sh SRC2 DST
    4.1G    SRC2
    4.1G    DST
    devuan:/mnt/memdisk# find DST | wc -l
    136838
    devuan:/mnt/memdisk#

    The overall copy time for rsync is:

    • 15.28 seconds for the big file in SRC1
    • 16.59 seconds for the full linux system in SRC2
    Tar
    devuan:/mnt/memdisk# cd SRC1
    devuan:/mnt/memdisk/SRC1# time tar -cf ../copy.tar *
    0.343u 2.756s 0:03.10 99.6%  0+0k 104+0io 1pf+0w
    devuan:/mnt/memdisk/SRC1# cd ../
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# time tar -xf ../copy.tar
    0.339u 3.071s 0:03.41 99.7%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST# cd ..
    devuan:/mnt/memdisk# diff -r SRC1/ DST/
    devuan:/mnt/memdisk# echo $?
    0
    devuan:/mnt/memdisk# rm -rf DST copy.tar
    devuan:/mnt/memdisk# cd SRC2
    devuan:/mnt/memdisk/SRC2# time tar -cf ../copy.tar *
    tar: tmp/.ICE-unix/19789: socket ignored
    tar: tmp/.X11-unix/X0: socket ignored
    0.760u 2.887s 0:03.66 99.4%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/SRC2# cd ..
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# time tar -xf ../copy.tar
    0.814u 3.556s 0:04.38 99.5%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST# cd ..
    devuan:/mnt/memdisk# du -sh SRC2 DST
    4.1G    SRC2
    4.1G    DST
    devuan:/mnt/memdisk# find DST | wc -l
    136834
    devuan:/mnt/memdisk#

    The overall copy time for tar is:

    • 6.51 seconds for the big file in SRC1
    • 8.04 seconds for the full Linux system in SRC2
    Cp
    devuan:/mnt/memdisk# time cp --preserve -r SRC1 DST
    0.051u 2.514s 0:02.58 99.2%  0+0k 8+0io 1pf+0w
    devuan:/mnt/memdisk# diff -r SRC1 DST
    devuan:/mnt/memdisk# echo $?
    0
    devuan:/mnt/memdisk# rm -rf DST
    devuan:/mnt/memdisk# time cp --preserve -r SRC2 DST
    0.910u 4.194s 0:05.15 99.0%  0+0k 288+0io 1pf+0w
    devuan:/mnt/memdisk# du -sh SRC2 DST
    4.1G    SRC2
    4.2G    DST
    devuan:/mnt/memdisk# find DST | wc -l
    136838
    devuan:/mnt/memdisk# find DST/SRC2/tmp/ -ls
    2315983  0 drwxrwxrwt 4 root  root  100 Dec  3 10:32 DST/SRC2/tmp/
    2315987  0 drwxrwxrwt 2 root  root   60 Dec  3 10:27 DST/SRC2/tmp/.ICE-unix
    2315988  0 srwxrwxrwx 1 denis denis   0 Dec  3 10:27 DST/SRC2/tmp/.ICE-unix/19789
    2315985  0 drwxrwxrwt 2 root  root   60 Dec  3 10:32 DST/SRC2/tmp/.X11-unix
    2315986  0 srwxrwxrwx 1 root  root    0 Dec  3 10:32 DST/SRC2/tmp/.X11-unix/X0
    2315984  4 -r--r--r-- 1 root  root   11 Dec  3 10:32 DST/SRC2/tmp/.X0-lock
    devuan:/mnt/memdisk#

    The overall copy time for cp is:

    • 2.58 seconds for the big file in SRC1
    • 5.15 seconds for the full Linux system in SRC2

    cp is always the fastest and does not reject the Unix sockets as tar does. However, it requires slightly more storage than all the other programs tested here. And if metadata (ACLs, Extended Attributes, filesystem-specific attributes, ...) needs to be copied with the data, it does not match the need.
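    Whether a given copy preserved such extended metadata can be checked afterwards; a sketch using the standard acl/attr tools (their availability on the system is assumed):

```shell
# dump ACLs of source and copy relative to each tree's root, then compare;
# the same can be done with "getfattr -R -d ." for extended attributes
(cd SRC2 && getfacl -R .) > src.acl
(cd DST/SRC2 && getfacl -R .) > dst.acl
diff src.acl dst.acl && echo "ACLs identical"
```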

    Performance of Backup operations

    For reference:

    devuan:/mnt/memdisk# du -sb state-*
    4095931349      state-1
    4136318367      state-2
    devuan:/mnt/memdisk# find state-1 | wc -l
    136837
    devuan:/mnt/memdisk# find state-2 | wc -l
    137239
    devuan:/mnt/memdisk#
    Dar
    Minimal features
    devuan:/mnt/memdisk# time dar -c dar-full -R state-1 -at -1 0 -z6 -q
    145.970u 3.263s 2:29.73 99.6%  0+0k 12344+0io 71pf+0w
    devuan:/mnt/memdisk# time dar -c dar-diff -R state-2 -A dar-full -at -1 0 -z6 -q -asecu
    8.957u 0.959s 0:09.93 99.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# ls -l *.dar
    -rw-r--r-- 1 root root   49498524 Dec  3 16:17 dar-diff.1.dar
    -rw-r--r-- 1 root root 1580562224 Dec  3 16:16 dar-full.1.dar
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time dar -x dar-full -R DST -q
    18.677u 4.244s 0:22.94 99.8%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time dar -x dar-diff -R DST -q -w
    2.585u 0.780s 0:03.48 96.5%  0+0k 1856+0io 20pf+0w
    devuan:/mnt/memdisk#
    devuan:/mnt/memdisk# time dar -x dar-full -R DST -q -w -g etc/fstab
    0.934u 0.036s 0:00.98 97.9%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk#

    for dar with minimal features (no metadata redundancy -at, no sparse file consideration -1 0):

    • data reduction on storage for the full backup is: 61.41%
    • data reduction on storage for the diff backup is: 98.80%
    • data reduction over the network is the same as on storage
    • execution time to restore a single file is: 0.98 s
    • execution time to restore the full backup: 22.94 s
    • execution time to restore the diff backup: 3.48 s
    • full backup time: 149.73 s
    • diff backup time: 9.93 s
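    The data-reduction figures above are simply one minus the ratio of backup size to source size; for instance the full backup figure comes from the 1580562224-byte archive against the 4095931349-byte source tree:

```shell
# compute the storage reduction of the full backup
awk 'BEGIN { printf "%.2f%%\n", (1 - 1580562224/4095931349) * 100 }'
# prints 61.41%
```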
    sparse file
    devuan:/mnt/memdisk# rm -rf DST *.dar
    devuan:/mnt/memdisk# time dar -c dar-full -R state-1 -at -z6 -q
    154.971u 3.000s 2:37.99 99.9%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time dar -c dar-diff -R state-2 -A dar-full -at -z6 -q -asecu
    9.488u 0.871s 0:10.37 99.8%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# ls -l *.dar
    -rw-r--r-- 1 root root   49505251 Dec  3 16:27 dar-diff.1.dar
    -rw-r--r-- 1 root root 1578428790 Dec  3 16:25 dar-full.1.dar
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time dar -x dar-full -R DST -q
    24.231u 6.110s 0:30.36 99.9%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time dar -x dar-diff -R DST -q -w
    2.677u 0.793s 0:03.48 99.4%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time dar -x dar-full -R DST -q -w -g etc/fstab
    1.067u 0.053s 0:01.13 98.2%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk#

    for dar with default features (no metadata redundancy -at, with sparse file consideration, which is activated by default):

    • data reduction on storage for the full backup is: 61.46%
    • data reduction on storage for the diff backup is: 98.80%
    • data reduction over the network is the same as on storage
    • execution time to restore a single file is: 1.13 s
    • execution time to restore the full backup: 30.36 s
    • execution time to restore the diff backup: 3.48 s
    • full backup time: 157.99 s
    • diff backup time: 10.37 s
    Sparse file and binary delta
    devuan:/mnt/memdisk# rm -rf DST *.dar
    devuan:/mnt/memdisk# time dar -c dar-full -R state-1 -at -z6 --delta sig -q
    159.262u 3.332s 2:42.62 99.9%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time dar -c dar-diff -R state-2 -A dar-full -at -z6 -q -asecu
    6.149u 0.950s 0:07.11 99.7%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# ls -l *.dar
    -rw-r--r-- 1 root root   23883368 Dec  3 16:39 dar-diff.1.dar
    -rw-r--r-- 1 root root 1602481058 Dec  3 16:38 dar-full.1.dar
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time dar -x dar-full -R DST -q
    24.169u 6.163s 0:30.35 99.9%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time dar -x dar-diff -R DST -q -w
    2.481u 0.942s 0:03.44 99.4%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time dar -x dar-full -R DST -q -w -g etc/fstab
    1.205u 0.059s 0:01.27 98.4%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk#

    for dar with advanced features (no metadata redundancy -at, with binary delta computation --delta sig):

    • data reduction on storage for the full backup is: 60.87%
    • data reduction on storage for the diff backup is: 99.42%
    • data reduction over the network is the same as on storage
    • execution time to restore a single file is: 1.27 s
    • execution time to restore the full backup: 30.35 s
    • execution time to restore the diff backup: 3.44 s
    • full backup time: 162.62 s
    • diff backup time: 7.11 s
    Rsync
    minimal features
    devuan:/mnt/memdisk# mkdir rsync-backup
    devuan:/mnt/memdisk# time rsync -arHz --info=stats state-1/* rsync-backup
    sent 1,585,540,014 bytes  received 2,174,472 bytes  10,080,726.90 bytes/sec
    total size is 4,260,538,564  speedup is 2.68
    202.640u 8.503s 2:36.98 134.5%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk#
    devuan:/mnt/memdisk# time rsync -arHz --info=stats --no-whole-file state-2/* rsync-backup
    sent 29,077,377 bytes  received 216,581 bytes  3,446,348.00 bytes/sec
    total size is 4,300,916,222  speedup is 146.82
    7.555u 1.115s 0:07.33 118.1%  0+0k 1784+0io 7pf+0w
    devuan:/mnt/memdisk# du -sb rsync-backup
    4136318307      rsync-backup
    devuan:/mnt/memdisk#
    devuan:/mnt/memdisk# rm -rf state-1
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time rsync -arHz --info=stats --no-whole-file rsync-backup/* DST
    sent 1,599,585,756 bytes  received 2,181,147 bytes  10,105,784.88 bytes/sec
    total size is 4,300,916,222  speedup is 2.69
    204.192u 8.306s 2:37.81 134.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time rsync -arHz --info=stats rsync-backup/etc/fstab DST/etc
    sent 44 bytes  received 12 bytes  112.00 bytes/sec
    total size is 664  speedup is 11.86
    0.001u 0.002s 0:00.00 0.0%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk#

    for rsync with minimal features (no sparse file consideration):

    • data reduction on storage for the full backup is: 0%
    • data reduction on storage for the diff backup is: 0%
    • data reduction over the network for the full backup is: 61.23%
    • data reduction over the network for the diff backup is: 99.29%
    • execution time to restore a single file is: 0.003 s
    • execution time to restore the full+diff backup: 157.81 s (this is not due to --no-whole-file, see the next test)
    • full backup time: 156.98 s
    • diff backup time: 7.33 s
    sparse file + binary delta
    devuan:/mnt/memdisk# mkdir rsync-backup
    devuan:/mnt/memdisk# time rsync -arHSz --info=stats state-1/* rsync-backup
    sent 1,585,540,014 bytes  received 2,174,460 bytes  8,605,498.50 bytes/sec
    total size is 4,260,538,564  speedup is 2.68
    232.038u 13.137s 3:03.44 133.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# time rsync -arHSz --info=stats --no-whole-file state-2/* rsync-backup
    sent 29,077,381 bytes  received 216,577 bytes  3,446,348.00 bytes/sec
    total size is 4,300,916,222  speedup is 146.82
    7.305u 1.275s 0:07.04 121.7%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# du -sb rsync-backup
    4136318307      rsync-backup
    devuan:/mnt/memdisk# rm -rf state-1
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time rsync -arHSz --info=stats rsync-backup/* DST
    sent 1,599,585,756 bytes  received 2,181,219 bytes  10,042,426.18 bytes/sec
    total size is 4,300,916,222  speedup is 2.69
    205.089u 12.354s 2:38.39 137.2%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk#
    devuan:/mnt/memdisk# time rsync -arHSz --info=stats rsync-backup/etc/fstab DST/etc
    sent 44 bytes  received 12 bytes  112.00 bytes/sec
    total size is 664  speedup is 11.86
    0.001u 0.002s 0:00.00 0.0%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk#

    for rsync with advanced features (sparse file consideration: -S option; binary delta: --no-whole-file):

    • data reduction on storage for the full backup is: 0%
    • data reduction on storage for the diff backup is: 0%
    • data reduction over the network for the full backup is: 60.64%
    • data reduction over the network for the diff backup is: 99.28%
    • execution time to restore a single file is: 0.003 s
    • execution time to restore the full+diff backup: 158.39 s
    • full backup time: 183.44 s
    • diff backup time: 7.04 s
    Tar
    minimal features
    devuan:/mnt/memdisk# cd state-1
    devuan:/mnt/memdisk/state-1# time tar --listed-incremental=../snapshot.file -czf ../tar-full.tar.gz *
    tar: tmp/.ICE-unix/19789: socket ignored
    tar: tmp/.X11-unix/X0: socket ignored
    153.624u 8.676s 2:31.71 106.9%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/state-1# time tar --listed-incremental=../snapshot.file -czf ../tar-diff.tar.gz *
    0.809u 0.369s 0:00.98 118.3%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/state-1# cd ..
    devuan:/mnt/memdisk# ls -l tar*
    -rw-r--r-- 1 root root     765425 Dec  3 16:49 tar-diff.tar.gz
    -rw-r--r-- 1 root root 1546464033 Dec  3 16:48 tar-full.tar.gz
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# time tar -xzf ../tar-full.tar.gz
    27.106u 6.756s 0:26.72 126.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST# cd ..
    devuan:/mnt/memdisk# diff --no-dereference -r state-1 DST
    Only in state-1: .cache
    Only in state-1/tmp/.ICE-unix: 19789
    Only in state-1/tmp/.X11-unix: X0
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# time tar -xzf ../tar-diff.tar.gz
    0.183u 0.085s 0:00.18 144.4%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST#

    Done that way, the tar differential backup is empty: it only contains empty directories, no file data. We will therefore apply the changes over state-1 rather than use state-2, where the changes are already in place. This seems to mean that if the system clock is wrong, or was wrong at the time a file was modified (daylight saving time? before NTP synchronization at system startup?) — which is equivalent to our situation here, where the changes were brought in before the full backup was made — those changes will not be backed up by tar, even though the file's attributes (size, last modification date, ...) have changed.

    devuan:/mnt/memdisk# cd state-1
    devuan:/mnt/memdisk/state-1# time tar --listed-incremental=../snapshot.file -czf ../tar-full.tar.gz *
    tar: tmp/.ICE-unix/19789: socket ignored
    tar: tmp/.X11-unix/X0: socket ignored
    150.751u 8.299s 2:28.59 107.0%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/state-1# cd ..
    devuan:/mnt/memdisk# dar -x ~denis/tmp/diff -R state-1 --fsa-scope none -u "*" -w -q
    devuan:/mnt/memdisk# cd state-1
    devuan:/mnt/memdisk/state-1# time tar --listed-incremental=../snapshot.file -czf ../tar-diff.tar.gz *
    6.147u 0.559s 0:06.40 104.5%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/state-1# cd ..
    devuan:/mnt/memdisk# ls -l tar* snapshot.file
    -rw-r--r-- 1 root root    3350869 Dec  3 17:08 snapshot.file
    -rw-r--r-- 1 root root   44607904 Dec  3 17:08 tar-diff.tar.gz
    -rw-r--r-- 1 root root 1546448179 Dec  3 17:04 tar-full.tar.gz
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# cd dst
    dst: No such file or directory.
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# time tar -xzf ../tar-full.tar.gz
    26.807u 7.020s 0:26.72 126.5%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST# time tar -xzf ../tar-diff.tar.gz
    1.492u 0.381s 0:01.48 126.3%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST# time tar -xzf ../tar-full.tar.gz etc/fstab
    25.219u 2.581s 0:25.15 110.4%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST#

    for tar with minimal features (no sparse file consideration):

    • data reduction on storage for the full backup is: 62.16%
    • data reduction on storage for the diff backup is: 98.92%
    • data reduction over the network is the same as on storage
    • execution time to restore a single file is: 25.15 s
    • execution time to restore the full backup: 26.72 s
    • execution time to restore the diff backup: 1.48 s
    • full backup time: 148.59 s
    • diff backup time: 6.40 s
    sparse file

    We had to recreate state-1, as we modified it during the previous test.

    devuan:/mnt/memdisk# rm tar* snapshot.file
    devuan:/mnt/memdisk# cd state-1
    devuan:/mnt/memdisk/state-1# time tar --listed-incremental=../snapshot.file -czSf ../tar-full.tar.gz *
    tar: tmp/.ICE-unix/19789: socket ignored
    tar: tmp/.X11-unix/X0: socket ignored
    152.878u 10.155s 2:29.38 109.1%  0+0k 1520+0io 18pf+0w
    devuan:/mnt/memdisk/state-1# dar -x ~denis/tmp/diff --fsa-scope none -u "*" -w -q
    devuan:/mnt/memdisk/state-1# time tar --listed-incremental=../snapshot.file -czSf ../tar-diff.tar.gz *
    6.369u 0.752s 0:06.55 108.5%  0+0k 3992+0io 16pf+0w
    devuan:/mnt/memdisk/state-1# cd ..
    devuan:/mnt/memdisk# ls -l tar* snap*
    -rw-r--r-- 1 root root    3350870 Dec  3 17:29 snapshot.file
    -rw-r--r-- 1 root root   44604194 Dec  3 17:29 tar-diff.tar.gz
    -rw-r--r-- 1 root root 1546226992 Dec  3 17:27 tar-full.tar.gz
    devuan:/mnt/memdisk# rm -rf DST
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# time tar -xzSf ../tar-full.tar.gz
    27.331u 7.774s 0:26.27 133.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST# time tar -xzSf ../tar-diff.tar.gz
    1.547u 0.487s 0:01.50 134.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST# time tar -xzSf ../tar-full.tar.gz etc/fstab
    25.068u 2.565s 0:25.00 110.4%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST#

    for tar with advanced features (sparse file consideration -S option):

    • data reduction on storage for the full backup is: 62.16%
    • data reduction on storage for the diff backup is: 98.92%
    • data reduction over the network is the same as on storage
    • execution time to restore a single file is: 25.0 s
    • execution time to restore the full backup: 26.27 s
    • execution time to restore the diff backup: 1.50 s
    • full backup time: 149.38 s
    • diff backup time: 6.55 s

    Ciphering performance

    We evaluate here ciphering and deciphering performance. To compare on the same base we use the following parameters:

    • AES-256 algorithm with CBC mode
    • pkcs5 v2 (pbkdf2) key derivation function (KDF) algorithm
    • KDF with 100,000 iterations
    • salt
    • password provided on command-line (insecure) to not depend on user or disk access
    • local system to backup and backup repository on a tmpfs filesystem
    • swap has been disabled to avoid tmpfs latency in case it would have been swapped out
    To measure the ciphering time only, we will not use compression, though most of the time compression should be used, given the use cases encryption matches: relatively long-term storage, costly cloud space and network transfer time, or limited removable media storage.

    The content that will be backed up is a copy of /usr directory tree. We will measure:

    • the time to backup
    • the time to restore the whole backup
    • and the time to restore just the "diff" binary

    devuan:/mnt/memdisk# mkdir SRC
    devuan:/mnt/memdisk# cp --preserve -r /usr SRC
    devuan:/mnt/memdisk# time dar -c backup -K "aes256:hello world!" --kdf-param 100000:sha1 -R SRC -q -at -1 0
    9.213u 3.245s 0:05.38 231.4%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# time dar -x backup -K "hello world!" -R DST -q
    Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted
    4.481u 2.628s 0:03.75 189.3%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# rm -rf DST/usr
    devuan:/mnt/memdisk# time dar -x backup -K "hello world!" -R DST -q -g usr/bin/diff
    Warning, the archive backup has been encrypted. A wrong key is not possible to detect, it would cause DAR to report the archive as corrupted
    0.419u 0.025s 0:00.42 102.3%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk#

    For dar the operation took:

    • 5.38 seconds to backup with encryption
    • 3.75 seconds to restore the whole ciphered backup
    • 0.42 seconds to restore a single file from the ciphered backup

    dar is noticeably quicker to decipher than to cipher the whole archive, and restoring a particular file is almost immediate. By default, dar uses argon2 as KDF, the most secure key-derivation algorithm as of year 2020, but we had to adapt to openssl as used with tar, which does not (yet) support this algorithm.

    To resist plain-text attacks, a variable-length elastic buffer containing random data is encrypted together with the backed-up files at the beginning and at the end of the backup. This has some performance cost (time to generate and time to cipher/decipher) and explains why two identical invocations of dar produce backups of slightly different sizes and execution times:

    devuan:/mnt/memdisk# time dar -c backup -K "aes256:hello world!" -at -1 0 -R SRC -q -w
    9.782u 3.413s 0:06.28 210.0%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# la backup.1.dar
    -rw-r--r-- 1 root root 1572706497 Nov  9 14:50 backup.1.dar
    devuan:/mnt/memdisk# time dar -c backup -K "aes256:hello world!" -at -1 0 -R SRC -q -w
    9.173u 2.845s 0:05.50 218.3%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk# la backup.1.dar
    -rw-r--r-- 1 root root 1572655217 Nov  9 14:50 backup.1.dar
    devuan:/mnt/memdisk#

    rsync has no way to store the backup in ciphered form. We now test tar directly:

    tar has no support for ciphering, though it seems some users work around this restriction with openssl. To measure the execution time, we have to create a script that pipes tar into openssl, so we can measure the execution time of the script as a whole. There is thus one script for the backup and one for the restoration with tar+openssl.

    devuan:/mnt/memdisk# cat > tar.backup
    #!/bin/bash

    if [ -z "$1" ] ; then
      echo "usage: $0 <backup name> [ <file or dir> ]"
      exit 1
    fi

    tar -cf - "$2" | openssl enc -e -aes256 -out "$1" -pbkdf2 -iter 100000 -salt -pass pass:"hello world!"
    devuan:/mnt/memdisk#
    devuan:/mnt/memdisk# chmod u+x tar.backup
    devuan:/mnt/memdisk# cd SRC
    devuan:/mnt/memdisk/SRC# time ../tar.backup ../backup.tar.crypted usr
    3.954u 2.498s 0:04.69 137.3%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/SRC# cd ..
    devuan:/mnt/memdisk# ls -l backup.tar.crypted
    -rw-r--r-- 1 root root 1603594272 Nov  9 13:33 backup.tar.crypted
    devuan:/mnt/memdisk#
    devuan:/mnt/memdisk# cat > tar.restore
    #!/bin/bash

    if [ -z "$1" ] ; then
      echo "usage: $0 <tar.crypted file> [<file or dir>]"
      exit 1
    fi

    openssl enc -d -aes256 -in "$1" -pbkdf2 -iter 100000 -salt -pass pass:"hello world!" | tar -x "$2"
    devuan:/mnt/memdisk# chmod u+x tar.restore
    devuan:/mnt/memdisk# rm -rf DST
    devuan:/mnt/memdisk# mkdir DST
    devuan:/mnt/memdisk# cd DST
    devuan:/mnt/memdisk/DST# time ../tar.restore ../backup.tar.crypted
    1.807u 2.821s 0:02.70 171.1%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST#
    devuan:/mnt/memdisk/DST# rm -rf usr
    devuan:/mnt/memdisk/DST# time ../tar.restore ../backup.tar.crypted usr/bin/diff
    1.336u 1.428s 0:01.79 153.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/DST# find .
    ./usr
    ./usr/bin
    ./usr/bin/diff
    devuan:/mnt/memdisk/DST#

    For tar the operation took:

    • 4.69 seconds for backup
    • 2.70 seconds to restore the whole backup
    • 1.79 seconds to restore a single file

    tar, like dar, takes about twice as long to cipher as to decipher; this seems to be related to the algorithm itself. tar is a bit faster than dar but lacks protection against clear-text attacks: the generated encrypted backups have exactly the same size, to the byte, which means the block boundaries and tar's internal file structure always lie at the same file offsets for a given content:

    devuan:/mnt/memdisk/SRC# time ../tar.backup ../backup.tar.crypted usr
    4.112u 2.343s 0:04.72 136.6%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/SRC# ls -l ../bac
    backup.1.dar  backup.tar.crypted
    devuan:/mnt/memdisk/SRC# ls -l ../backup.tar.crypted
    -rw-r--r-- 1 root root 1603594272 Nov  9 14:56 ../backup.tar.crypted
    devuan:/mnt/memdisk/SRC# time ../tar.backup ../backup.tar.crypted usr
    3.952u 2.564s 0:04.79 135.9%  0+0k 0+0io 0pf+0w
    devuan:/mnt/memdisk/SRC# ls -l ../backup.tar.crypted
    -rw-r--r-- 1 root root 1603594272 Nov  9 14:56 ../backup.tar.crypted
    devuan:/mnt/memdisk/SRC#

    Scripts used in this benchmark

    The following scripts are also available for download from the directory

    historization_feature script

    #!/bin/bash

    if [ -z "$1" -o -z "$2" ] ; then
        echo "usage: $0 <dir> {phase1 | phase2}"
        exit 1
    fi

    dir="$1"
    phase="$2"

    case "$phase" in
        phase1)
            if [ -e "$dir" ] ; then
                echo "$dir exists, remove it first"
                exit 2
            fi
            mkdir "$dir"
            echo "Hello World!" > "$dir/A.txt"
            echo "Bonjour tout le monde !" > "$dir/B.txt"
            ;;
        phase2)
            if [ ! -d "$dir" ] ; then
                echo "$dir does not exist or is not a directory, run phase1 first"
                exit 2
            fi
            rm -f "$dir/A.txt"
            echo "Buongiorno a tutti !" > "$dir/C.txt"
            ;;
        *)
            echo "unknown phase"
            exit 2
            ;;
    esac

    always_change script

    #!/bin/bash

    if [ -z "$1" ] ; then
        echo "usage: $0 <filename>"
        exit 1
    fi

    while /bin/true ; do
        touch "$1"
    done
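    This script is meant to run in the background to simulate a file changing while it is being saved; dar can then be told to retry saving such files. A usage sketch (the file path and retry count are illustrative):

```shell
# keep touching the file while a backup runs, to exercise dar's detection
# of files that changed during the save
./always_change /mnt/memdisk/SRC/busy_file &
dar -c backup -R /mnt/memdisk/SRC --retry-on-change 3 -q
kill %1    # stop the touch loop afterwards
```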

    bitflip script

    #!/bin/bash

    if [ -z "$1" -o -z "$2" ] ; then
        echo "usage: $0 <offset in bit> <file>"
        echo "flip the bit of the file located at the provided offset"
        exit 1
    fi

    offbit=$1
    file="$2"

    offbyte=$(( $offbit / 8 ))
    bitinbyte=$(( $offbit - ($offbyte * 8) ))
    readbyte=`xxd -s $offbyte -p -l 1 "$file"`
    mask=$(( 1 << $bitinbyte ))
    newbyte=$(( 0x$readbyte ^ $mask ))
    hexanewbyte=`printf "%.2x" $newbyte`
    echo $hexanewbyte | xxd -p -l 1 -s $offbyte -r - "$file"
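    A typical use of this script, combined with dar's backup testing mode (the bit offset is arbitrary):

```shell
# flip one bit in the middle of a slice, then ask dar to test the backup;
# dar then reports the corruption it detects
./bitflip 8000000 backup.1.dar
dar -t backup
```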

    build_test_tree.bash script

    #!/bin/bash

    if [ -z "$1" ] ; then
        echo "usage: $0 <directory>"
        exit 1
    fi

    if [ -e "$1" ] ; then
        echo "$1 already exists, remove it or use another directory name"
        exit 1
    fi

    if ! dar -V > /dev/null ; then
        echo "need dar to copy unix socket to the test tree"
        exit 1
    fi

    mkdir "$1"
    cd "$1"

    # creating
    mkdir "SUB"
    dd if=/dev/zero of=plain_zeroed bs=1024 count=1024
    dd if=/dev/urandom of=random bs=1024 count=1024
    dd if=/dev/zero of=sparse_file bs=1 count=1 seek=10239999
    ln -s random SUB/symlink-broken
    ln -s ../random SUB/symlink-valid
    mkfifo pipe
    mknod null c 3 1
    mknod fd1 b 2 1
    dar -c - -R / -g dev/log -N -Q -q | dar -x - --sequential-read -N -q -Q
    ln sparse_file SUB/hard_linked_sparse_file
    ln dev/log SUB/hard_linked_socket
    ln pipe SUB/hard_linked_pipe

    # modifying dates and permissions
    sleep 2
    chown nobody random
    chown -h bin SUB/symlink-valid
    chgrp -h daemon SUB/symlink-valid
    sleep 2
    echo hello >> random
    sleep 2
    cat < random > /dev/null

    # adding Extended Attributes, assuming the filesystem has the user_xattr and acl options set
    setfacl -m u:nobody:rwx plain_zeroed && setfattr -n "user.hello" -v "hello world!!!" plain_zeroed || (echo "FAILED TO CREATE EXTENDED ATTRIBUTES" && exit 1)

    # adding filesystem specific attributes
    chattr +dis plain_zeroed

    hide_change script

    #!/bin/bash

    if [ -z "$1" ] ; then
        echo "usage: $0 <filename> [<bit offset>]"
        echo "modify one bit and hide the change"
        exit 1
    fi

    atime=`stat "$1" | sed -rn -s 's/^Access:\s+(.*)\+.*/\1/p'`
    mtime=`stat "$1" | sed -rn -s 's/^Modify:\s+(.*)\+.*/\1/p'`
    bitoffset="$2"

    if [ -z "$bitoffset" ] ; then
        bitoffset=2
    fi

    ./bitflip "$bitoffset" "$1"
    touch -d "$mtime" "$1"
    touch -a -d "$atime" "$1"

dar-2.7.17/doc/README

Dar Documentation Main Directory

All the documentation has been moved to HTML. To access it, please point your web browser to the index.html file found in this directory.

dar-2.7.17/doc/Makefile.am

#######################################################################
# dar - disk archive - a backup/restoration program
# Copyright (C) 2002-2024 Denis Corbin
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# to contact the author, see the AUTHOR file
#######################################################################

SUBDIRS = samples mini-howto man

dist_noinst_DATA = COMMAND_LINE Doxyfile portable_cp Benchmark_tools/README Benchmark_tools/always_change Benchmark_tools/bitflip Benchmark_tools/build_test_tree.bash Benchmark_tools/hide_change Benchmark_tools/historization_feature restoration_dependencies.txt

dist_pkgdata_DATA = README Features.html Limitations.html Notes.html Tutorial.html Good_Backup_Practice.html FAQ.html api_tutorial.html dar_doc.jpg dar_s_doc.jpg index.html dar-catalog.dtd authentification.html dar_key.txt old_dar_key1.txt old_dar_key2.txt from_sources.html presentation.html usage_notes.html python/libdar_test.py style.css restoration-with-dar.html benchmark.html benchmark_logs.html index_dar.html index_internal.html index_libdar.html

if USE_DOXYGEN
DOXYGEN = @DOXYGEN_PROG@

all-local: Doxyfile.tmp

Doxyfile.tmp:
	sed -e "s%##VERSION##%@PACKAGE_VERSION@%g" -e "s%##HAS_DOT##%@HAS_DOT@%g" -e 's%##SRCDIR##%$(abs_top_srcdir)%g' -e 's%##BUILDDIR##%$(abs_top_builddir)%g' '$(srcdir)/Doxyfile' > Doxyfile.tmp
	cd '$(top_srcdir)' ; $(DOXYGEN) '$(abs_top_builddir)/doc/Doxyfile.tmp'
	if [ -d html/search ]; then chmod u+x html/search ; fi

clean-local:
	rm -rf html Doxyfile.tmp doxygen_sqlite3.db

install-data-hook:
	'$(srcdir)/portable_cp' html $(DESTDIR)$(pkgdatadir)
	$(INSTALL) -d $(DESTDIR)$(pkgdatadir)/python
	$(INSTALL) -m 0644 '$(srcdir)/python/libdar_test.py' $(DESTDIR)$(pkgdatadir)/python

uninstall-hook:
	rm -rf $(DESTDIR)$(pkgdatadir)/html
	rm -rf $(DESTDIR)$(pkgdatadir)/python
	rmdir $(DESTDIR)$(pkgdatadir) || true

else
all-local:
clean-local:
install-data-hook:
uninstall-hook:
endif

dar-2.7.17/doc/COMMAND_LINE

Status flags:
---------------
!  : not used
-  : used without arg
+  : used with argument
*  : used with optional argument
#  : obsolete
---------------
letters usage:
---------------

a  *  alteration of operation  --alter
      argument: a[time] binary[-unit[s]] b[lind-to-signatures] c[time] case
      d[ecremental] do-not-compare-symlink-mtime duc e[rase_ea] f[ixed-date]
      file-auth[entication] g[lob] h[oles-recheck] header
      i[gnore-unknown-inode-type] k[eep-compressed] l[axist] list-ea m[ask]
      n[o-case] p[lace] r[egex] s[aved] secu SI[-unit[s]] t[ape-marks]
      test-self-reported-bug u[nix-sockets] vc / verbose-libcurl y abyte[s]
      z[eroing-negative-dates]
b  -  terminal bell  --bell
c  +  create archive  --create
d  +  difference with filesystem  --diff
e  -  simulate the operation  --empty (aka dry-run)
f  -  do not restore directory structure  --flat
g  +  recurse in this directory  --go-into
h  -  help usage  --help
i  +  path of the input pipe  --input
j  +  retry delay for networking errors  --network-retry-delay
k  +  do not delete files dropped between two backups  --no-delete / --deleted { ignore | only }
l  +  list archive contents  --list
m  +  minimum size below which no compression will occur  --mincompr
      a default value is used. use -m 0 if you want to compress in any case.
n  -  do not allow overwriting  --no-overwrite
o  +  path of the output pipe  --output
p  +  pause before creating new slice  --pause
q  -  suppress final statistics report  --quiet
r  -  do not overwrite more recent file  --recent [=date] (to implement)
s  +  size of slice  --slice
t  +  test archive structure  --test
u  +  exclude EA from operation (mask)  --exclude-ea
v  +  verbose output  --verbose
      --verbose={skipped|treated|messages|dir|masks|all}
w  *  do not warn before overwriting  --no-warn
x  +  extract archive  --extract
y  +  repair archive  --add-missing-catalogue
z  *  compression  --gzip --compress
A  +  make a differential backup  --ref
B  +  filename taken as command-line extension  --config
C  +  extract catalogue in separated file  --isolate
D  -  store excluded dir as empty dir  --empty-dir
E  +  shell command to launch with slices  --execute
F  +  shell command to launch with slices of archive of reference  --execute-ref
G  +  multi-thread management  --multi-thread
H  *  if a file for differential backup or diff differs by exactly one hour it is assumed identical (no change).
I  +  include mask  --include
J  +  key for unscrambling the reference catalogue  --key-ref
K  +  key for un/scrambling  --key
L  -  license information
M  -  only consider what is under the current mounting point.
      --no-mount-points
N  -  do not read any configuration file ~/.darrc or /etc/darrc
O  +  ignore user Id and group Id  --ignore-owner
P  +  prune a directory tree  --prune
Q  +  quiet on stderr at startup (no long option equivalent)
R  +  set the root directory  --root
S  +  size of the first slice  --first-slice
T  +  (listing) tree listing format  --tree-format, --list-format=
T  +  (create/merge/isolate) iteration count for key derivation  --kdf-param
U  +  include EA for the operation (mask)  --include-ea
V  -  version information  --version
W  -  warranty disclosure information (POSIX RESERVED, sorry)
X  +  exclude mask  --exclude
Y  +  include only given filenames for compression  --include-compression
Z  +  exclude given filenames from compression  --exclude-compression
   -  don't save files with nodump flag set  --nodump
#  +  encryption block size  --crypto-block
*  +  encryption block size for the archive of reference  --crypto-block-ref
,  -  cache directory tagging standard  --cache-directory-tagging
[  +  include files listed in given file  --include-from-file
]  +  exclude files listed in given file  --exclude-from-file
-  x  (forbidden by getopt)
?  x  (forbidden by getopt)
:  x  (forbidden by getopt)
+  +  merging operation  --merge
@  +  second reference archive for merging  --aux-ref
$  +  key for auxiliary reference archive  --aux-key
~  +  command to execute between slices  --aux-execute
%  +  encryption block size for the auxiliary archive of reference  --aux-crypto-block
/  +  policy to solve overwriting conflict  --overwriting-policy
^  +  ownership and permission of generated slices  --slice-mode
_  +  retry on change  --retry-on-change
{  +  include files for delta signature  --include-delta-sig
}  +  exclude files for delta signature  --exclude-delta-sig
0  *  sequential read of the archive  --sequential-read
1  +  sparse-file detection tuning  --sparse-file-min-size
2  +  dirty file behavior  --dirty-behavior {ignore|no-warn}
3  +  create a hash algorithm (+algo)  --hash
4  +  filesystem specific attr.
      family  --fsa-scope
5  +  exclude file having a given EA set  --exclude-by-ea []
6  +  minimum size for delta signatures  --delta-sig-min-size
7  +  define keys used to sign the archive  --sign
8  +  delta binary diff  --delta sig, --delta patch
9  +  min_digits  --min-digits archive[,ref[,aux]];
"  +  anonymous pipe descriptor to read conf from.  --pipe-fd
'  +  how to detect modified data in diff backup  --modified-data-detection= {any-change | crc-comparison}
.  +  user comment  --user-comment
;  x  (forbidden by getopt)
<  +  backup hook mask  --backup-hook-include
>  +  backup hook mask  --backup-hook-exclude
=  +  backup hook execute  --backup-hook-execute
\  +  ignored as symlinks  --ignored-as-symlinks

dar-2.7.17/doc/dar_doc.jpg  [binary JPEG data omitted]
dar-2.7.17/doc/python/libdar_test.py

import libdar
import os, sys

# mandatory first action is to initialize libdar
# by calling libdar.get_version()
u = libdar.get_version()
print("using libdar version {}.{}.{}".format(u[0],u[1],u[2]))
# libdar.get_version() can be called at will

# now defining a very minimalist class to let libdar
# interact with the user directly.
# One could make use
# of a graphical popup window here if a Graphical User Interface
# was used:

class myui(libdar.user_interaction):

    def __init__(self):
        libdar.user_interaction.__init__(self)
        # it is mandatory to initialize the parent
        # class: libdar.user_interaction

    def inherited_message(self, msg):
        print("LIBDAR MESSAGE:{0}".format(msg))
        # the "LIBDAR MESSAGE" is pure demonstration
        # to see when output comes through this
        # user_interaction child class

    def inherited_pause(self, msg):
        while True:
            res = input("LIBDAR QUESTION:{0} y/n ".format(msg))
            if res == "y":
                return True
            else:
                if res == "n":
                    return False
                else:
                    print("answer 'y' or 'n'")

    def inherited_get_string(self, msg, echo):
        return input(msg)
        # we should take care of the boolean value echo
        # and not show what the user types when echo is False

    def inherited_get_secu_string(self, msg, echo):
        return input(msg)
        # we should take care of the boolean value echo
        # and not show what the user types when echo is False

# exceptions from libdar (libdar::Egeneric, Erange, ...) on the
# C++ side are all translated to the libdar.darexc class introduced
# in the python binding.
# This class has the __str__() method to
# get the message string about the cause of the exception;
# it gets displayed naturally when you don't catch them from
# a python shell.
# here is an example on how to handle libdar.darexc exceptions:

try:
    x = libdar.deci("not an integer")
except libdar.darexc as obj:
    print("libdar exception: {}".format(obj.__str__()))

# here follow some helper routines as illustration
# on how to manage some libdar data structures

# display libdar.statistics (not all fields are shown, see help(libdar.statistics))

def display_stats(stats):
    print("---- stats result ---")
    print("treated entries = {}".format(stats.get_treated_str()))
    print("hard link entries = {}".format(stats.get_hard_links_str()))
    print("skipped entries = {}".format(stats.get_skipped_str()))
    print("inode only entries = {}".format(stats.get_inode_only_str()))
    print("ignored entries = {}".format(stats.get_ignored_str()))
    print("too old entries = {}".format(stats.get_tooold_str()))
    print("errored entries = {}".format(stats.get_errored_str()))
    print("deleted entries = {}".format(stats.get_deleted_str()))
    print("EA entries = {}".format(stats.get_ea_treated_str()))
    print("FSA entries = {}".format(stats.get_fsa_treated_str()))
    print("hard link entries = {}".format(stats.get_hard_links_str()))
    print("wasted byte amount = {}".format(stats.get_byte_amount_str()))
    print("total entries = {}".format(i2str(stats.total())))
    print("---------------------")

# for displaying libdar.entree_stats there is a predefined method
# as used here that will rely on a libdar.user_interaction to
# display the contents. You may also access the different fields
# by hand. see help(libdar.entree_stats) for details

def display_entree_stats(stats, ui):
    print("--- archive content stats ---")
    stats.listing(ui)
    print("-----------------------------")

# for this latter structure (libdar.entree_stats) you will probably
# want to play with libdar.infinint.
# It has quite all the operations you
# can expect on integers (*,/,+,-,*=,+=,-=,/=,^=, >>=, <<=, %=,<,>,==,!=,...)
# the libdar.deci() class can be built from a libdar.infinint or
# from a python string and provides two methods: human() that
# returns a python string representing the number in base ten,
# and computer() that returns a libdar.infinint

def f0():
    x = libdar.infinint("122")
    dy = libdar.deci("28")
    y = dy.computer() # which is equivalent to y = libdar.infinint("28")
    z = x / y # integer division
    print("the integer division of {} by {} gives {}".format(libdar.deci(x).human(), dy.human(), libdar.deci(z).human()))

    # there is also the libdar.euclide(x, y) method that returns
    # the integer quotient and the remainder as a couple, from the numerator and divisor
    # passed in argument:
    res = libdar.euclide(x, y)
    print("{} / {} = {} with a remainder of {}".format(libdar.deci(x).human(), dy.human(), libdar.deci(res[0]).human(), libdar.deci(res[1]).human()))

# libdar.infinint to string:

def i2str(infinint):
    return libdar.deci(infinint).human()

# this is an example of a routine that, given an open libdar.archive,
# will provide its listing content. This call is recursive but
# you are free to recurse or not upon user events (expanding a directory
# in the current display for example).
# Note that the method libdar.archive.get_children_in_table returns
# a list of objects of type libdar.list_entry which has a long
# list of methods to provide very detailed information
# for a given entry in the archive.
For more about it, # see help(libdar.list_entry) def list_dir(archive, chem = "", indent = ""): content = archive.get_children_in_table(chem, True) # contents is a list of libdar.list_entry objects for ent in content: ligne = indent if ent.is_eod(): continue if ent.is_hard_linked(): ligne += "*" else: ligne += " " if ent.is_dir(): ligne += "d" else: if ent.is_file(): ligne += "f" else: if ent.is_symlink(): ligne += "l" else: if ent.is_char_device(): ligne += "c" else: if ent.is_block_device(): ligne += "b" else: if ent.is_unix_socket(): ligne += "s" else: if ent.is_named_pipe(): ligne += "p" else: if ent.is_door_inode(): ligne += "D" else: if ent.is_removed_entry(): ligne += "Removed entry which was of of type {}".format(ent.get_removed_type()) else: ligne += "WHAT THIS????" continue ligne += ent.get_perm() + " " + ent.get_name() + " " ligne += ent.get_uid(True) + "/" + ent.get_gid(True) + " " ligne += ent.get_last_modif() print(ligne) # now peparing the recursion when we meet a directory: if ent.is_dir(): if chem != "": nchem = (libdar.path(chem) + ent.get_name()).display() else: nchem = ent.get_name() nindent = indent + " " list_dir(archive, nchem, nindent) # in the following we will provide several functions that # either create, read, test, diff or extract an archive # all will rely on the following global variables: ui = myui() sauv_path = libdar.path(".") arch_name = "arch1" ext = "dar" # let's create an archive. the class # libdar.archive_options_create has a default constructor # that set the options to their default values, the clear() # method can also be used to reset thm to default. # then a bunch of method are provided to modify each of them # according to your needs. See help(libdar.archive_options_create) # and the API reference documentation for their nature and meaning # the libdar.path can be set from a python string but has some # method to pop, add a sub-directory easily. 
# the libdar.path.display()
# method provides the representative string of the path

def f1():
    opt = libdar.archive_options_create()
    opt.set_info_details(True)
    opt.set_display_treated(True, False)
    opt.set_display_finished(True)
    fsroot = libdar.path("/etc")
    print("creating the archive")
    libdar.archive(ui, fsroot, sauv_path, arch_name, ext, opt)

# at the difference of C++, for the several constructors and methods
# like op_diff, op_test, etc. where a libdar::statistics *progressive_report
# field is present, the python binding has two equivalent methods, one
# without this field, and a second with a plain libdar.statistics field.
# this latter object can be read from another thread while a libdar operation runs
# with it given as argument. This lets the user see the progression of the
# operation (mainly counters on the number of inodes
# treated, skipped, errored, etc.). More detail in the API reference guide

# by the way you will see user interaction in action as we
# tend to overwrite the archive created in f1(), assuming you
# run f1(), f2().... in order for the demo

def f2():
    opt = libdar.archive_options_create()
    opt.set_info_details(True)
    opt.set_display_treated(False, False)
    opt.set_display_finished(True)
    fsroot = libdar.path("/etc")
    stats = libdar.statistics()
    print("creating the archive")
    libdar.archive(ui, fsroot, sauv_path, arch_name, ext, opt, stats)
    display_stats(stats)

# here we read an existing archive. The first
# phase is to create a libdar.archive object,
# the second is to act upon it. Several actions
# can be done in sequence on an existing object
# opened that way (extracting several times, diff, test,
# and so on)

def f3():
    opt = libdar.archive_options_read()
    opt.set_info_details(True)
    arch1 = libdar.archive(ui, sauv_path, arch_name, ext, opt);
    list_dir(arch1)
    stats = arch1.get_stats()
    display_entree_stats(stats, ui)

# below we will play with masks. Most operations have two
# masks to filter the files they will apply on.
# The
# first, "set_selection()", applies to filenames only;
# the second, "set_subtree()", applies to the whole path instead.
# What type of libdar.mask() you setup for these is
# completely free. Pay attention that when a directory
# is excluded (by means of set_subtree()) all its content
# and recursively all its subdirectories are skipped.
# the list of classes inheriting from libdar.mask() is:
# - bool_mask(bool) either always true or always false
# - libdar.simple_mask(string) the provided string is read as a glob
#   expression, which is the syntax most shells use, like bash
# - libdar.regex_mask(string) the argument is read as a
#   regular expression
# - simple_path_mask(string) matches if the string to
#   compare to is a subdir of the string provided to the
#   constructor, or if this string is a subdir of the string
#   to compare to. This is mostly adapted to select a
#   given directory for an operation, as all the path leading
#   to it must match and all subdirectories in that directory
#   must also match.
# - same_path_mask(string) matches only the given
#   argument. This is intended for directory pruning
#
# most masks have in fact a second argument in their
# constructor (a boolean) that defines whether the mask
# is case sensitive (True) or not (False)
# - not_mask(mask) gives the negation of the mask
#   provided in argument
# - et_mask() + add_mask(mask) + add_mask(mask) +...
#   makes a logical AND between the added masks
# - ou_mask() + add_mask(mask) + add_mask(mask) +...
#   makes a logical OR between the added masks
# why these French "et" and "ou" words? because at the
# time they were added this code was internal to dar
# and I frequently used French words to designate my
# own data structures to differentiate them from English symbols
# brought from outside. This code has not changed since then,
# hence the reason.
# of course you can add_mask() an ou_mask(), a not_mask()
# or yet an et_mask() recursively at will and make arbitrarily complex
# masks mixing them with simple_mask(), regular_mask(), and so on.

def f4():
    opt = libdar.archive_options_read()
    arch1 = libdar.archive(ui, sauv_path, arch_name, ext, opt);

    opt = libdar.archive_options_test()

    # defining which files to test based on filename (set_selection)
    mask_file1 = libdar.simple_mask("*.*", True)
    mask_file2 = libdar.regular_mask(".*\.pub$", True)
    mask_filenames = libdar.ou_mask()
    # doing the logical OR between what we will add to it:
    mask_filenames.add_mask(mask_file1)
    mask_filenames.add_mask(mask_file2)
    opt.set_selection(mask_filenames)

    # reducing the testing in subdirectories
    tree1 = libdar.simple_path_mask("/etc/ssh", False)
    tree2 = libdar.simple_path_mask("/etc/grub.d", False)
    tree = libdar.et_mask()
    # doing the logical AND between what we will add to it:
    tree.add_mask(libdar.not_mask(tree1))
    tree.add_mask(libdar.not_mask(tree2))
    opt.set_subtree(tree)

    opt.set_info_details(True)
    opt.set_display_skipped(True)
    arch1.op_test(opt)

# nothing much different from previously,
# except that we compare the archive with
# the filesystem (op_diff) while we tested the
# archive coherence previously (op_test)

def f5():
    opt = libdar.archive_options_read()
    arch1 = libdar.archive(ui, sauv_path, arch_name, ext, opt);

    tree1 = libdar.simple_path_mask("/etc/ssh", False)
    tree2 = libdar.simple_path_mask("/etc/grub.d", False)
    tree = libdar.ou_mask()
    tree.add_mask(tree1)
    tree.add_mask(tree2)

    opt = libdar.archive_options_diff()
    opt.set_subtree(tree)
    opt.set_info_details(True)
    opt.set_display_treated(True, False)
    opt.set_ea_mask(libdar.bool_mask(True))
    opt.set_furtive_read_mode(False)
    arch1.op_diff(libdar.path("/etc"), opt)

    rest = libdar.path("./Restore")
    try:
        os.rmdir(rest.display())
    except:
        pass
    os.mkdir(rest.display())

    opt = libdar.archive_options_extract()

    # the overwriting policy can receive
    # objects from many different crit_action_*
    # classes
    # - crit_constant_action() used here always does the same
    #   action on Data and EA+FSA when a conflict arises that
    #   would lead to overwriting
    # - testing(criterium) the action depends on the evaluation
    #   of the provided criterium (see below)
    # - crit_chain() + add(crit_action) performs the different
    #   crit_actions added in sequence; the first one that provides
    #   an action for Data and/or EA+FSA is retained. If no action
    #   is left undefined the following crit_actions of the chain are
    #   not evaluated
    #
    # for the testing crit_action inherited class, we need to provide
    # a criterium object. Here too there is a set of inherited classes
    # that come to help:
    # - crit_in_place_is_inode
    # - crit_in_place_is_dir
    # - crit_in_place_is_file
    # - ...
    # - crit_not (to take the negation of the given criterium)
    # - crit_or + add_crit() + add_crit() ... makes the logical OR
    # - crit_and + add_crit() + add_crit()... for the logical AND
    # - crit_invert for the in_place/to_be_added inversion
    # Read the manual page about overwriting policy for details,
    # but in substance the criterium returns true or false for each
    # file in conflict, and the object of class testing that uses
    # this criterium applies the action given as "go_true" or the
    # action given as "go_false" depending on the result

    over_policy = libdar.crit_constant_action(libdar.over_action_data.data_preserve, libdar.over_action_ea.EA_preserve)
    opt.set_overwriting_rules(over_policy)

    # fsa_scope is a std::set on the C++ side and translates to a
    # python set on the python side.
    # Use the add() method to add
    # values to the set:
    fsa_scope = set()
    fsa_scope.add(libdar.fsa_family.fsaf_hfs_plus)
    fsa_scope.add(libdar.fsa_family.fsaf_linux_extX)
    opt.set_fsa_scope(fsa_scope)

    stats = libdar.statistics()
    arch1.op_extract(rest, opt, stats)
    display_stats(stats)

# last, all operations that interact with the filesystem use by default
# a libdar.entrepot_local object (provided by the archive_options_*
# object); this makes the archive written to and read from the local filesystem.
# However you can replace this entrepot by an object of class
# libdar.entrepot_libcurl to read or write an archive over the network
# directly from libdar by means of the FTP or SFTP protocols. Here follows an
# illustration of this possibility:

def f6():
    opt = libdar.archive_options_read()
    passwd = "joe@the.shmoe"
    secu_pass = libdar.secu_string(passwd, len(passwd))
    entrepot = libdar.entrepot_libcurl(ui, libdar.mycurl_protocol.proto_ftp, "anonymous", secu_pass, "ftp.edrusb.org", "", False, "", "", "", 5)
    print(entrepot.get_url())
    opt.set_entrepot(entrepot)
    opt.set_info_details(True)
    arch2 = libdar.archive(ui, libdar.path("/dar.linux.free.fr/Python_tutorial"), "example", "dar", opt)
    opt2 = libdar.archive_options_test()
    opt2.set_display_treated(True, False)
    arch2.op_test(opt2)

# other classes of interest:
# - libdar.database for the dar_manager features
# - libdar.libdar_xform for the dar_xform features
# - libdar.libdar_slave for the dar_slave features
# they are all three accessible from python and follow
# very closely the C++ syntax and usage
# thanks to refer to the API documentation or to the
# C++ tutorial

dar-2.7.17/doc/restoration-with-dar.html

Flexible Restoration with dar
    Dar Documentation

    Flexibly Restoring a whole system with dar

    Introduction

    Restoration is usually the trickiest part of a backup process. The backup process covers the whole chain of creating backups, storing them in a secured place, protecting backup data against unauthorized access and against corruption over time, and rebuilding a whole system from scratch upon major failure, system corruption, security breach,... It may concern a single system (a host with its operating system, applications, configurations, user data), a set of independent systems, but also "recursive systems" like hypervisors and their many possible virtual machines (we will also illustrate that latter case in this document).

    The second purpose of a backup process is to provide file history, in order to be able to restore a file deleted by mistake (even long after the mistake was made) or corrupted, or to get a file back in the state it had for a previous application version, which is precious when a software upgrade breaks a legacy feature you need more than the new features.

    But there is no single backup process that matches everyone's needs. For example, syncing your local data to the cloud is easy and may be suitable for personal use (well, depending on how much you care about privacy...). But as it also exposes all your data, assets, proprietary software and patents to the eyes of the cloud provider, it may not be suitable for companies whose production secrets or secret recipes constitute the source of their revenue. Neither may it be suitable for an individual fighting for human rights and freedom in a country where these natural rights are banished. And last, it does not let you rebuild your whole system: if you saved only your documents, you will have to reinstall all applications and the particular configurations you had adapted to your needs over time, as well as possibly find or buy again the license keys to activate the proprietary software you were using.

    At the opposite, restoring a whole system, with not only the user data but also the application binaries, configurations, operating system and so on, in the state it had at the time of the backup, requires some skills and knowledge. The objective of this document is to provide some tested recipes to help anyone new to this operation, using Disk ARchive (dar) as the backup tool under Linux and more generally Unixes (including macOS).

    Some notes about Dar software:
    Unlike backup tools that copy bytes verbatim from the disk to a file, dar keeps track of files inside the file-system: it stores everything related to these files (metadata, attributes, Extended Attributes, data, sparse nature of files).

    The advantages are:

    • much less data to save (the free blocks of a file-system are not part of the backup),
    • can perform differential and incremental backups (so the new backup is very small and processing time very fast),
    • can even use rsync-based binary delta per saved file between two backups (so you do not re-save a whole large binary file when it has changed),
    • can restore on any disk size (if large enough to hold the data),
    • change the partition layout,
    • use a different file-system,
    • can compress specific files and avoid trying to compress some others during the backup process
    • can use strong, state-of-the-art encryption efficiently
    • the backup format is very robust against corruption: it can both detect a corruption affecting a file and still recover the rest of the backup
    • the use of parity data can be integrated with dar (thanks to the Parchive software, as we will show here), leading to the ability to repair the backup
    • and so on.

    The drawback is that you will have to manually recreate the disk partitions and format the file-systems as you want, in order to restore files into them. The objective of this document is thus to explain how to do that and to let you see that this task is not complex and brings a lot of freedom. In the second part of this document, the variations will show what changes when considering LVM, LUKS and a Proxmox VE hypervisor.

    Backup creation

    What to backup

    Do I have to backup everything? Well, in fact no. You can exclude all virtual file-systems like /dev, /proc and /sys (see dar's -P option) as well as any temporary and cache directory (/tmp /var/tmp /var/cache/apt/archives /var/backups /home/*cache*/*...) and the directories named "lost+found", which will be recreated at restoration time while formatting the target file-systems. If you use LVM to store your system, you might be interested, just for further reference, in recording within the backup the output of the lsblk command, which gives the current partitions, Volume Group names, Logical Volume names and their usage in the running system at the time of the backup (see the -< and -= options, below).

    Here is an example of a configuration used on a Proxmox system (a Debian-based KVM hypervisor). For more details refer to the man page, but in summary here are the options used and their meaning:

    -R option
    Defines the root of the data to backup. Here the backup scope is system wide so we give it "/" as argument
    -am
    Selects the ordered and natural mask combination
    -D
    When excluding a directory (like /sys for example), store the directory itself as empty in the backup; this way the mount-point will be recreated at restoration time
    -P
    prunes the directory given in argument (which is relative to the -R root, so -P dev excludes /dev here). It can be used multiple times.
    -g
    derogates from a previous -P option by including the directory given in argument
    -z/-zbzip2
    compresses the backup, here with the bzip2 algorithm
    -s
    splits the backup into several files (called slices) to avoid having a possibly huge single file
    -B
    includes other options defined in the file given in argument
    compress-exclusion
    is a set of options (a so-called "target") defined in /etc/darrc that provides a long list of file types that are not worth trying to compress (already compressed files, for example)
    no-emacs-backup
    is another target that avoids saving emacs temporary backup files
    bell
    yet another target still defined in /etc/darrc that makes the terminal ring upon user interaction request
    -E
    executes the provided command after each created slice; here we run a script that leads par2 to generate parity data for each slice
    --slice-mode
    defines the permission of the backup slices that will be created
    --retry-on-change
    as we perform the backup of a live system, we retry saving (up to 3 times) any file that changed while it was being read for backup.
    -<
    when entering the /root directory execute the command provided with -= option
    -=
    defines the command to execute when saving a directory or file referred to by the -< option

    As the backup part of the process is recurrent, it is convenient to drop all these options in a configuration file (here /root/.darrc, for these options to be used by default):

    root@soleil:~# cat .darrc
    all:
    -R /

    create:
    -am
    -D
    -P dev
    -P run
    -P sys
    -P proc
    -P tmp
    -P var/lib/vz
    # this is where proxmox stores VM backups so we save the directory:
    -g var/lib/vz/dump
    -P var/lib/lxcfs
    -P var/cache/apt/archives
    -P etc/pve
    -P var/backups
    -P lost+found
    -P */lost+found
    -P root/tmp
    -P mnt
    -zbzip2
    -s 1G
    --nodump
    --cache-directory-tagging
    -B /etc/darrc
    compress-exclusion
    no-emacs-backup
    bell
    # will calculate the parity file of each generated slice
    -E "/usr/share/dar/samples/dar_par_create.duc %p %b %N %e %c 1"
    --slice-mode 0640
    --retry-on-change 3:1024000
    # when entering the /root directory, dar will run lsblk and store its
    # output into /root/lsblk.txt then this file will be part of the backup
    # as we have not excluded it (by mean of -P, -X, -] and similar options)
    -< root
    -= "/bin/lsblk > %p/lsblk.txt"

    Dar_static

    We will copy the dar_static binary beside the backup so that the restoration does not rely on anything else. Some users also add a bit of dar documentation (including this document); that's up to you to decide.
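    As a precaution, you can also record a checksum next to the copied binary, so its integrity can be verified from the rescue system before trusting it at restoration time. A minimal sketch follows; the temporary directory and the stand-in file are illustrative (in real use you would copy the actual dar_static binary into your backup directory):

```shell
# Illustrative sketch: keep a checksum beside dar_static so it can be
# verified from the rescue system before restoring.
backup_dir=$(mktemp -d)                          # stands for /mnt/Backup
printf 'fake binary' > "$backup_dir/dar_static"  # stand-in for the real binary
( cd "$backup_dir" && sha256sum dar_static > dar_static.sha256 )
# at restoration time, verify before executing:
( cd "$backup_dir" && sha256sum -c dar_static.sha256 )
```

    The verification step prints "dar_static: OK" and exits non-zero on any mismatch, which makes it safe to chain with && before running the binary.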

    Ciphering

    If the backup has to be ciphered (-K option), it is better to use a symmetric encryption algorithm than an asymmetric one: with the former, you will just be asked for the passphrase to decipher the backup and restore your data, while with asymmetric encryption, it is the private key, plus the knowledge of the passphrase used to unlock it (if any), that will be needed. As a consequence, this needed information --- the private key --- must be stored outside the backup (in your head for a passphrase, or on an unciphered removable media for a private key, for example).

    Ciphering backups becomes necessary when using a public cloud provider to store them, or, for consistency, when your system itself is stored on ciphered volumes (LUKS for example).

    Direct Attached Storage (DAS)

    For direct attached storage (DAS), like a local disk, USB key or legacy DVDs, there is no difficulty. You will probably want to adapt the -s/-S options to a divisor of the media size, possibly adding parity data when low-end media are used (just add the word par2 on the command-line or in .darrc).
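    The arithmetic behind "a divisor of the media size" can be sketched as follows (the 4482 MiB usable DVD capacity and the slice count are assumptions; check your own media):

```shell
# pick a slice size (-s) that divides the media capacity, so a whole
# number of slices fits on each disc with no wasted tail space
media_mib=4482        # assumed usable capacity of the media, in MiB
slices_per_disc=3     # how many slices we want per disc
slice_mib=$(( media_mib / slices_per_disc ))
echo "use -s ${slice_mib}M"   # prints: use -s 1494M
```

    With several slices per disc rather than one, a single damaged disc only costs you the slices it carries, and par2 parity data can be sized per slice.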

    Network Attached Storage (NAS)

    Of course, network access needs to be set up before you are able to restore your data. The rescue system must also support one of the network protocols available with your NAS to access your backups. For protocols other than FTP and SFTP, a temporary local storage may be needed, and thus slicing dar backups (see -s option) will be very useful to perform a restoration without requiring a very large temporary local disk. In addition you can automate the downloading of slices from dar by means of the -E option. But when using FTP or SFTP, dar can read the backup directly from the NAS, and thus absolutely no local temporary storage is required for restoration in that case.
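    To illustrate how -E can automate slice fetching: dar expands %p (slice path), %b (backup basename), %N (slice number) and %e (extension) before running the command. The helper below is hypothetical; the echo is a stand-in for the real transfer command (scp, curl, ...):

```shell
# fetch_slice is a hypothetical helper; dar would invoke it through
# -E "fetch_slice %p %b %N %e" with the macros already expanded
fetch_slice() {
    path="$1"; base="$2"; num="$3"; ext="$4"
    # real use: scp user@backup-host:/some/where/"$base.$num.$ext" "$path"
    echo "would fetch $base.$num.$ext into $path"
}
fetch_slice /mnt/Backup soleil-full-2020-09-16 1 dar
```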

    Partitions

    Dar is partition independent, so we will have to recreate partitions before restoration starts. At no time do you have to recreate exactly the same partition layout: if you know some partitions were nearly saturated or oversized, you can take the opportunity of the restoration to review the partition sizes, or even consider a completely different partition/disk layout (for example, splitting /var from / into a separate partition, or putting some partitions together if it makes better sense), or move to encrypted LUKS disks, LVM, and so on.

    UEFI Boot

    UEFI boot uses an EFI partition (vfat formatted) where the boot binaries of the different operating systems present on the host are stored. This partition is only used before the Linux system is started, but it is mounted under /boot/efi once the system has booted, so it can be saved by dar without any effort. We will see a little trick about the EFI partition at restoration time.

    Legacy MBR boot

    Without UEFI, you stick to the legacy MBR boot process, but there is nothing too complicated here: it will just be necessary to re-install the boot loader from the restored system; we will describe that too.

    Restoration Process

    Booting a pristine system

    So you have made and tested your backups as usual, and today you need to restore them on a brand-new computer. The proposition is to use SystemRescueCd for that. Do not be confused by this name: it can produce bootable CDs/DVDs, but also bootable USB keys. Knoppix is also a good alternative.

    Once SystemRescueCd has booted, you get a shell prompt ready to interpret your commands. For those not having US keyboards, you can change the layout thanks to the loadkeys command, if you skipped the prompt that let you select it:

    [root@sysresccd ~]# loadkeys fr
    [root@sysresccd ~]#

    Accessing the backup from the host

    In the following we will detail three different ways to access the backup; choose the one that best suits your context:

    • Direct Attached Storage (DAS)
    • Network Attached Storage (NAS) without FTP or SFTP protocols
    • Network Attached Storage (NAS) accessed using FTP or SFTP protocols

    Accessing the backup (DAS context)

    In the case of DAS (local disk, tape, USB key/disk, CD/DVD, floppy(!), ...), we can use lsblk to identify the backup partition and/or LVM volume. Then we can mount it:

    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]# cd /mnt
    [root@sysresccd /mnt]# mkdir Backup
    [root@sysresccd /mnt]# mount /dev/sdb1 Backup
    [root@sysresccd /mnt]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd /mnt]#

    Creating a local temporary storage (NAS context without (S)FTP access)

    In the case of network storage (NAS) without FTP or SFTP protocol support, we need a local temporary file-system (removed at the end of the restoration process). Here we use lsblk to list all disks, then gdisk to create a partition, mkfs to format the file-system, and mount to have it ready for use.

    In the below example we use a 32 GB USB key for temporary storage:

    [root@sysresccd ~]# lsblk
    NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0   7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda     8:0    0   100G  0 disk
    sdb     8:16   0    32G  0 disk
    sr0    11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]# gdisk /dev/sdb
    GPT fdisk (gdisk) version 1.0.4

    Partition table scan:
      MBR: not present
      BSD: not present
      APM: not present
      GPT: not present

    Creating new GPT entries in memory.

    Command (? for help): n
    Partition number (1-128, default 1):
    First sector (34-67108830, default = 2048) or {+-}size{KMGTP}:
    Last sector (2048-67108830, default = 67108830) or {+-}size{KMGTP}:
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300):
    Changed type of partition to 'Linux filesystem'

    Command (? for help): p
    Disk /dev/sdb: 67108864 sectors, 32.0 GiB
    Model: QEMU HARDDISK
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): 89112323-E1B3-42D7-BB61-8084C1D359F9
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 67108830
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)

    Number  Start (sector)    End (sector)  Size       Code  Name
       1            2048        67108830   32.0 GiB   8300  Linux filesystem

    Command (? for help): w

    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!

    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sdb.
    The operation has completed successfully.

    [root@sysresccd /mnt]# mkfs.ext4 /dev/sdb1
    mke2fs 1.45.0 (6-Mar-2019)
    Discarding device blocks: done
    Creating filesystem with 8388347 4k blocks and 2097152 inodes
    Filesystem UUID: c7ee69b8-89f4-4ae3-92cb-b0a9e41a5fa8
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624

    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done

    [root@sysresccd ~]# cd /mnt
    [root@sysresccd /mnt]# mkdir Backup
    [root@sysresccd /mnt]# mount /dev/sdb1 Backup
    [root@sysresccd /mnt]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd /mnt]#

    You can now fetch each slice dar requests and drop it into that temporary /mnt/Backup directory, removing it afterward. Dar's -E option may be of some use to automate the process. Assuming you use scp to fetch the slices, you could use the following to instruct dar where to obtain the slices from (for http or https you could use curl to do something equivalent):

    [root@sysresccd /mnt]# cat > ~/.darrc <<EOF
    -E "rm -f /mnt/Backup/%b.*.%e ; scp user@backup-host:/some/where/%b.%N.%e /mnt/Backup"
    EOF
    [root@sysresccd /mnt]#

    Note that dar will initially require slice number zero, meaning the last slice of the backup. You can write a complicated script to handle that, but you can also easily cope with it by manually downloading the last slice into /mnt/Backup before starting the restoration: dar will find it there and will not request it anymore.
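    If the backup host gives you a listing of the slice files, the number of the last slice can be derived from the file names (<basename>.<N>.<ext>). A sketch with dummy slice files standing in for the remote listing:

```shell
# determine the highest slice number among files named base.N.dar
base=soleil-full-2020-09-16
dir=$(mktemp -d)                                 # stands for the remote listing
for n in 1 2 3 10; do : > "$dir/$base.$n.dar"; done   # dummy slices
last=$(ls "$dir" | sed -n "s/^$base\.\([0-9]\{1,\}\)\.dar$/\1/p" | sort -n | tail -n 1)
echo "last slice: $base.$last.dar"
```

    The numeric sort matters: a plain lexical sort would put slice 10 before slice 2.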

    If you do not have, or do not want to use, a disk for this temporary storage, you can rely on your host's memory thanks to a tmpfs file-system:

    [root@sysresccd /mnt]# mkdir /mnt/Backup
    [root@sysresccd /mnt]# mount -t tmpfs -o size=2G tmpfs /mnt/Backup
    [root@sysresccd /mnt]#

    NAS with FTP or SFTP

    During the SystemRescueCd boot process, you have been asked to provide network information, so we assume you did well and this volatile system has operational network access (whether through DHCP or not does not matter at this step, whatever the network configuration of the system we are restoring). If you plan to use FTP or SFTP embedded within dar, you do not need to prepare any local temporary storage; there just remains the network access to the NAS to validate:

    [root@sysresccd ~]# ping 192.168.6.6
    PING 192.168.6.6 (192.168.6.6) 56(84) bytes of data.
    64 bytes from 192.168.6.6: icmp_seq=1 ttl=64 time=1.33 ms
    64 bytes from 192.168.6.6: icmp_seq=2 ttl=64 time=0.667 ms
    ^C
    --- 192.168.6.6 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 3ms
    rtt min/avg/max/mdev = 0.667/0.999/1.332/0.334 ms
    [root@sysresccd ~]#

    It is also possible to validate the FTP or SFTP access availability using the associated credentials with the CLI ftp or sftp command.

    Preparing partitions

    As stated above, you have total freedom to create the same or a different partition layout; it will not reduce or impact the ability to restore with dar. This may be the opportunity to use LVM, RAID or a SAN, LUKS ciphered volumes, or at the opposite to get back to plain old partitions. That's up to you to decide. In the following we will first use plain partitions with UEFI boot (and MBR boot), then in the variations part of this document we will revisit the process using LVM and UEFI, then again with even more stuff: LUKS, LVM and UEFI all at the same time.

    the EFI partition

    To boot in UEFI mode, a small EFI partition has to be created and vfat formatted. Here we used a size of 1 MiB, which is large enough for a host booting a single Linux system (using grub), but you will sometimes find it with a size of 512 MiB.

    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]# gdisk /dev/sda
    GPT fdisk (gdisk) version 1.0.4

    Partition table scan:
      MBR: not present
      BSD: not present
      APM: not present
      GPT: not present

    Creating new GPT entries in memory.

    Command (? for help): n
    Partition number (1-128, default 1):
    First sector (34-209715166, default = 2048) or {+-}size{KMGTP}:
    Last sector (2048-209715166, default = 209715166) or {+-}size{KMGTP}: 4095
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300): ef00
    Changed type of partition to 'EFI System'

    Command (? for help): p
    Disk /dev/sda: 209715200 sectors, 100.0 GiB
    Model: QEMU HARDDISK
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 209715166
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 209713085 sectors (100.0 GiB)

    Number  Start (sector)    End (sector)  Size         Code  Name
       1            2048            4095   1024.0 KiB   EF00  EFI System

    Command (? for help): w

    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!

    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sda.
    The operation has completed successfully.
    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    `-sda1   8:1    0     1M  0 part
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]#

    The root partition

    Here we will use a single partition to restore the system to, but you are free to use as many as you want (and also use LVM instead of partitions if you prefer. See the variations part at the end of this document).

    [root@sysresccd ~]# gdisk /dev/sda
    GPT fdisk (gdisk) version 1.0.4

    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present

    Found valid GPT with protective MBR; using GPT.

    Command (? for help): n
    Partition number (2-128, default 2):
    First sector (34-209715166, default = 4096) or {+-}size{KMGTP}:
    Last sector (4096-209715166, default = 209715166) or {+-}size{KMGTP}: +80G
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300):
    Changed type of partition to 'Linux filesystem'

    Command (? for help): p
    Disk /dev/sda: 209715200 sectors, 100.0 GiB
    Model: QEMU HARDDISK
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 209715166
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 41940925 sectors (20.0 GiB)

    Number  Start (sector)    End (sector)  Size         Code  Name
       1            2048            4095   1024.0 KiB   EF00  EFI System
       2            4096       167776255   80.0 GiB     8300  Linux filesystem

    Command (? for help): w

    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!

    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sda.
    The operation has completed successfully.
    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    |-sda1   8:1    0     1M  0 part
    `-sda2   8:2    0    80G  0 part
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]#

    A swap space

    It is always a good idea to have swap space, either as a swap file or, better, as one or several swap partitions (not necessarily big ones, depending on your needs). Here follows the creation of a 1 GiB swap partition:

    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    |-sda1   8:1    0     1M  0 part
    `-sda2   8:2    0    80G  0 part
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]# gdisk /dev/sda
    GPT fdisk (gdisk) version 1.0.4

    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present

    Found valid GPT with protective MBR; using GPT.

    Command (? for help): n
    Partition number (3-128, default 3):
    First sector (34-209715166, default = 167776256) or {+-}size{KMGTP}:
    Last sector (167776256-209715166, default = 209715166) or {+-}size{KMGTP}: +1G
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300): 8200
    Changed type of partition to 'Linux swap'

    Command (? for help): p
    Disk /dev/sda: 209715200 sectors, 100.0 GiB
    Model: QEMU HARDDISK
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 209715166
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 39843773 sectors (19.0 GiB)

    Number  Start (sector)    End (sector)  Size         Code  Name
       1            2048            4095   1024.0 KiB   EF00  EFI System
       2            4096       167776255   80.0 GiB     8300  Linux filesystem
       3       167776256       169873407   1024.0 MiB   8200  Linux swap

    Command (? for help): w

    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!

    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sda.
    The operation has completed successfully.

    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    |-sda1   8:1    0     1M  0 part
    |-sda2   8:2    0    80G  0 part
    `-sda3   8:3    0     1G  0 part
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]#

    Formatting File-systems

    swap partition

    In order to be usable, all the partitions we just created have to be formatted. Let's start with the swap partition:

    [root@sysresccd ~]# gdisk -l /dev/sda
    GPT fdisk (gdisk) version 1.0.4

    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sda: 209715200 sectors, 100.0 GiB
    Model: QEMU HARDDISK
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 209715166
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 39843773 sectors (19.0 GiB)

    Number  Start (sector)    End (sector)  Size         Code  Name
       1            2048            4095   1024.0 KiB   EF00  EFI System
       2            4096       167776255   80.0 GiB     8300  Linux filesystem
       3       167776256       169873407   1024.0 MiB   8200  Linux swap
    [root@sysresccd ~]# mkswap /dev/sda3
    Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
    no label, UUID=51f75caa-6cf3-421f-a18a-c58e77f61795
    [root@sysresccd ~]#

    Optionally, we can even use this swap partition right now for the current rescue system; this may be interesting especially if you used a tmpfs file-system as temporary local storage:

    [root@sysresccd ~]# free
                  total        used        free      shared  buff/cache   available
    Mem:        8165684       84432      139112       95864     7942140     7680632
    Swap:             0           0           0
    [root@sysresccd ~]# swapon /dev/sda3
    [root@sysresccd ~]# free
                  total        used        free      shared  buff/cache   available
    Mem:        8165684       85372      137976       95864     7942336     7679804
    Swap:       1048572           0     1048572
    [root@sysresccd ~]#

    Root file-system

    Nothing tricky here:

    [root@sysresccd ~]# gdisk -l /dev/sda
    GPT fdisk (gdisk) version 1.0.4

    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sda: 209715200 sectors, 100.0 GiB
    Model: QEMU HARDDISK
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 209715166
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 39843773 sectors (19.0 GiB)

    Number  Start (sector)    End (sector)  Size         Code  Name
       1            2048            4095   1024.0 KiB   EF00  EFI System
       2            4096       167776255   80.0 GiB     8300  Linux filesystem
       3       167776256       169873407   1024.0 MiB   8200  Linux swap
    [root@sysresccd ~]# mkfs.ext4 /dev/sda2
    mke2fs 1.45.0 (6-Mar-2019)
    Discarding device blocks: done
    Creating filesystem with 20971520 4k blocks and 5242880 inodes
    Filesystem UUID: ec6319f3-789f-433d-a983-01d577e3e862
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000

    Allocating group tables: done
    Writing inode tables: done
    Creating journal (131072 blocks): done
    Writing superblocks and filesystem accounting information: done

    [root@sysresccd ~]#

    We will mount this partition to be able to restore data into it:

    [root@sysresccd ~]# mkdir /mnt/R
    [root@sysresccd ~]# mount /dev/sda2 /mnt/R
    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    |-sda1   8:1    0     1M  0 part
    |-sda2   8:2    0    80G  0 part /mnt/R
    `-sda3   8:3    0     1G  0 part [SWAP]
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]#

    EFI Partition

    The EFI partition is a vfat partition that is usually mounted under /boot/efi after the system has booted. So we will format it and mount it at that location under /mnt/R, where we have temporarily mounted the future root file-system.

    If your original system uses the legacy MBR booting process, you can just skip this EFI partition step: when reinstalling grub, the MBR will be set up as expected.

    [root@sysresccd ~]# mkfs.vfat -n UEFI /dev/sda1
    mkfs.fat 4.1 (2017-01-24)
    [root@sysresccd ~]# cd /mnt/R
    [root@sysresccd /mnt/R]# mkdir -p boot/efi
    [root@sysresccd /mnt/R]# mount /dev/sda1 boot/efi
    [root@sysresccd /mnt/R]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    |-sda1   8:1    0     1M  0 part /mnt/R/boot/efi
    |-sda2   8:2    0    80G  0 part /mnt/R
    `-sda3   8:3    0     1G  0 part [SWAP]
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part /mnt/Backup
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd /mnt/R]#

    Restoring data with dar

    All is ready to receive the data, so we run dar, here below in the case of a DAS or NAS without (S)FTP protocols:

    [root@sysresccd ~]# cd /mnt/Backup
    [root@sysresccd /mnt/Backup]# ls -al
    total 3948
    drwxr-xr-x 4 root root    4096 Oct  4 14:49 .
    drwxr-xr-x 1 root root      80 Oct  4 15:35 ..
    -rwxr-xr-x 1 root root 4017928 Oct  4 14:49 dar_static
    drwx------ 2 root root   16384 Oct  4 13:49 lost+found
    drwxr-xr-x 2 root root    4096 Oct  4 15:00 soleil-full-2020-09-16
    [root@sysresccd /mnt/Backup]# ./dar_static -x soleil-full-2020-09-16/soleil-full-2020-09-16 -R /mnt/R -X "lost+found" -w
    Archive soleil-full-2020-09-16 requires a password:
    Warning, the archive soleil-full-2020-09-16 has been encrypted. A wrong key
    is not possible to detect, it would cause DAR to report the archive as corrupted
     --------------------------------------------
     62845 inode(s) restored
        including 11 hard link(s)
     0 inode(s) not restored (not saved in archive)
     0 inode(s) not restored (overwriting policy decision)
     0 inode(s) ignored (excluded by filters)
     0 inode(s) failed to restore (filesystem error)
     0 inode(s) deleted
     --------------------------------------------
     Total number of inode(s) considered: 62845
     --------------------------------------------
     EA restored for 1 inode(s)
     FSA restored for 0 inode(s)
     --------------------------------------------
    [root@sysresccd /mnt/Backup]#

    For a NAS with SFTP or FTP this is even simpler, though we have to download dar_static first:

    [root@sysresccd ~]# scp denis@192.168.6.6:/mnt/Backup/dar_static .
    The authenticity of host '192.168.6.6 (192.168.6.6)' can't be established.
    ECDSA key fingerprint is SHA256:6l+YisP2V2l82LWXvWb1DFFYEkzxRex6xmSoY/KY2YU.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '192.168.6.6' (ECDSA) to the list of known hosts.
    denis@192.168.6.6's password:
    dar_static                                   100% 3924KB  72.7MB/s   00:00
    [root@sysresccd ~]# ./dar_static -x sftp://denis@192.168.6.6/mnt/Backup/Soleil/soleil-full-2020-09-16/soleil-full-2020-09-16 -R /mnt/R -X "lost+found" -w
    Please provide the password for login denis at host 192.168.6.6:
    Archive soleil-full-2020-09-16 requires a password:
    Warning, the archive soleil-full-2020-09-16 has been encrypted. A wrong key
    is not possible to detect, it would cause DAR to report the archive as corrupted
     --------------------------------------------
     62845 inode(s) restored
        including 11 hard link(s)
     0 inode(s) not restored (not saved in archive)
     0 inode(s) not restored (overwriting policy decision)
     0 inode(s) ignored (excluded by filters)
     0 inode(s) failed to restore (filesystem error)
     0 inode(s) deleted
     --------------------------------------------
     Total number of inode(s) considered: 62845
     --------------------------------------------
     EA restored for 1 inode(s)
     FSA restored for 0 inode(s)
     --------------------------------------------
    [root@sysresccd /mnt/Backup]#

    Adaptation of the restored data

    The UUIDs of the different file-systems and of the swap space have been recreated; if the restored /etc/fstab refers to file-systems by their UUID, we have to adapt it to the new UUIDs. The blkid command lets you grab the UUIDs of the file-systems we created, including the swap partition, so we can edit /mnt/R/etc/fstab (using vi or joe, both available from SystemRescueCd).
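    If both the old and the new UUIDs are known (the old one from the blkid/lsblk output saved in the backup), the substitution can also be scripted with sed instead of edited by hand. A sketch on a throwaway fstab: the old UUID below is a placeholder, while the new one is the swap UUID printed by mkswap in this document.

```shell
# replace the old swap UUID by the new one in the restored fstab
old_uuid="00000000-0000-0000-0000-000000000000"  # placeholder old UUID
new_uuid="51f75caa-6cf3-421f-a18a-c58e77f61795"  # UUID printed by mkswap
fstab=$(mktemp)                                  # stands for /mnt/R/etc/fstab
printf 'UUID=%s none swap sw 0 0\n' "$old_uuid" > "$fstab"
sed -i "s/$old_uuid/$new_uuid/" "$fstab"
cat "$fstab"
```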

    If your system boots by means of an initramfs, you should also check and, if necessary, edit the restored /mnt/R/etc/initramfs-tools/conf.d/resume with the new UUID of the swap partition.

    Note that we can also look up the original UUIDs and, when creating the filesystems (formatting them), give each one the same UUID it had on the backed-up system. This implies you saved the output of blkid within the backup. See the -U option of the mke2fs and mkswap programs to set the UUID a filesystem is created with. Both methods are valid; the latter does not require adapting the restored data afterwards.
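    For illustration, here is a hedged sketch of that second method, assuming you saved the blkid output inside the backup (the UUIDs and device names below are the example values used elsewhere in this document, not values to copy as-is):

```shell
# Recreate the filesystems with their ORIGINAL UUIDs, as recorded by the
# blkid output saved in the backup (values below are this document's examples).
mkfs.ext4 -U ec6319f3-789f-433d-a983-01d577e3e862 /dev/sda2
mkswap    -U 51f75caa-6cf3-421f-a18a-c58e77f61795 /dev/sda3
```

    With this approach the restored /etc/fstab needs no adaptation at all.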

    If, like me, you like neither of these editors but prefer emacs, for example for its ability to run an embedded shell and copy&paste between the shell running blkid and the fstab file you are editing, and assuming emacs is available in the system under restoration, you can postpone this edition of fstab until we have chrooted, see below.

    Note that the root filesystem UUID does not matter here, as we will regenerate the ramdisk and the grub configuration file based on its new UUID. However, if you have more partitions than the few used in this example, /mnt/R/etc/fstab should be updated with their new UUIDs or /dev/ paths accordingly.

    [root@sysresccd ~]# blkid
    /dev/sda1: SEC_TYPE="msdos" LABEL_FATBOOT="UEFI" LABEL="UEFI" UUID="CB52-4920" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="edb894df-e58f-4590-a167-bf5b9025a691"
    /dev/sda2: UUID="ec6319f3-789f-433d-a983-01d577e3e862" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="8f707306-e1b5-4019-aabb-0d39da9057be"
    /dev/sda3: UUID="51f75caa-6cf3-421f-a18a-c58e77f61795" TYPE="swap" PARTLABEL="Linux swap" PARTUUID="d0e52f52-3cd3-4396-8e03-972d9f76af49"
    /dev/sdb1: UUID="c7ee69b8-89f4-4ae3-92cb-b0a9e41a5fa8" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="15e0fb22-7de7-487c-8a68-ecaa2bb19dd0"
    /dev/sr0: UUID="2019-04-14-11-35-22-00" LABEL="SYSRCD603" TYPE="iso9660" PTUUID="0d4f1b4a" PTTYPE="dos"
    /dev/loop0: TYPE="squashfs"
    [root@sysresccd ~]# vi /mnt/R/etc/fstab
    [root@sysresccd ~]# vi /mnt/R/etc/initramfs-tools/conf.d/resume
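    Alternatively the substitution can be scripted. A minimal sketch, assuming you know both the old UUID (recorded on the backed-up system) and the new one (reported by blkid); both values below are placeholders:

```shell
# Replace an old filesystem UUID by the new one in the restored fstab.
# OLD_UUID is a placeholder for the value from the backed-up system;
# NEW_UUID is a placeholder for the value blkid reports after formatting.
OLD_UUID="00000000-0000-0000-0000-000000000000"
NEW_UUID="ec6319f3-789f-433d-a983-01d577e3e862"
sed -i "s/$OLD_UUID/$NEW_UUID/g" /mnt/R/etc/fstab
```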

    Let's now reinstall the boot loader (grub in our case). To achieve this we will chroot into /mnt/R. But as, in this chrooted environment, we will also need access to /dev, /proc and /sys (and, if using UEFI boot, to the /sys/firmware/efi/efivars filesystem), we first bind-mount those inside /mnt/R:

    [root@sysresccd ~]# mount
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    dev on /dev type devtmpfs (rw,nosuid,relatime,size=4060004k,nr_inodes=1015001,mode=755)
    run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
    efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
    /dev/sr0 on /run/archiso/bootmnt type iso9660 (ro,relatime,nojoliet,check=s,map=n,blocksize=2048)
    cowspace on /run/archiso/cowspace type tmpfs (rw,relatime,size=262144k,mode=755)
    /dev/loop0 on /run/archiso/sfs/airootfs type squashfs (ro,relatime)
    airootfs on / type overlay (rw,relatime,lowerdir=/run/archiso/sfs/airootfs,upperdir=/run/archiso/cowspace/persistent_SYSRCD603/x86_64/upperdir,workdir=/run/archiso/cowspace/persistent_SYSRCD603/x86_64/workdir,index=off)
    securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
    tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
    devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
    tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
    cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
    cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
    pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
    bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
    cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
    cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
    cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
    cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
    cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
    cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
    cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
    cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
    cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
    systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
    debugfs on /sys/kernel/debug type debugfs (rw,relatime)
    hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
    tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
    configfs on /sys/kernel/config type configfs (rw,relatime)
    mqueue on /dev/mqueue type mqueue (rw,relatime)
    tmpfs on /etc/pacman.d/gnupg type tmpfs (rw,relatime,mode=755)
    tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=816568k,mode=700)
    /dev/sdb1 on /mnt/Backup type ext4 (rw,relatime)
    /dev/sda2 on /mnt/R type ext4 (rw,relatime)
    /dev/sda1 on /mnt/R/boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro)
    [root@sysresccd ~]# cd /mnt/R
    [root@sysresccd /mnt/R]# mount --bind /proc proc
    [root@sysresccd /mnt/R]# mount --bind /sys sys
    [root@sysresccd /mnt/R]# mount --bind /dev dev
    [root@sysresccd /mnt/R]# mount --bind /run run
    [root@sysresccd /mnt/R]# mount --bind /sys/firmware/efi/efivars sys/firmware/efi/efivars
    [root@sysresccd /mnt/R]# chroot . /bin/bash
    root@sysresccd:/#

    If not done previously, you can now edit /etc/fstab with your favorite text editor available in the system under restoration. Then we can reinstall grub, rebuild the initramfs (if used), and exit the chrooted environment.

    root@sysresccd:/# export PATH=/sbin:/usr/sbin:/bin:$PATH
    root@sysresccd:/# update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.15.18-21-pve
    root@sysresccd:/# update-grub
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-4.15.18-21-pve
    Found initrd image: /boot/initrd.img-4.15.18-21-pve
    Found memtest86+ image: /boot/memtest86+.bin
    Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
    done
    root@sysresccd:/# grub-install
    Installing for x86_64-efi platform.
    Installation finished. No error reported.
    root@sysresccd:~# exit
    exit
    [root@sysresccd /mnt/R]#

    If you get the following warning when running update-grub, you probably forgot to bind-mount /run as described in the previous paragraph:

    WARNING: Device /dev/XYZ not initialized in udev database even after waiting 10000000 microseconds.

    Checking the motherboard when rebooting

    You can restart the system now and remove the systemrescueCD boot device we used for the restoration process.

    [root@sysresccd /mnt/R]# shutdown -r now

    At the first boot, pause in the "BIOS" setup (press the "F2", "F9" or "Del" key depending on the hardware) to check that the motherboard points to the correct binary inside the EFI partition of the hard disk, or, if using the MBR boot process instead, that the hard disk is at the correct place in the boot device list.

    Networking Interfaces

    Now that the system is up and running again, the network interface names may have changed depending on the nature of the new hardware. You may have to edit /etc/network/interfaces or the equivalent configuration file (/etc/sysconfig/network-scripts/...) if you are not using automatic tools like NetworkManager together with DHCP, for example.
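    To see which names the interfaces got on the new hardware, the iproute2 suite gives a quick overview (a sketch; the brief output format may vary between versions):

```shell
# List all network interfaces with their state in brief form, to compare
# against the interface names used in the restored configuration files.
ip -br link
```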



    THIS ENDS THE RESTORATION PROCESS. WE WILL NOW SEE SOME VARIATIONS OF THIS PROCESS FOR SOME MORE SPECIFIC CONTEXTS.




    Restoring to LVM volumes

    You might prefer, especially when using Proxmox Virtual Environment, to restore to LVM, having a Logical Volume for the root filesystem (the proxmox system) and one for its swap space, and allocating the rest of the space to a thin-pool where VMs get their block storage.

    Note that if you save the proxmox VE host as a normal Debian system, this is fine, but it will not save the VMs and containers you had running under Proxmox. However, you can save the /var/lib/vz/dump directory where the backups of your VMs reside. This assumes you have scheduled a backup job within proxmox VE for these VMs and containers.
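    As an illustration, here is a hedged sketch of such a full dar run on the proxmox host, which keeps /var/lib/vz/dump in the archive simply by not pruning it (the archive path and the list of pruned directories are illustrative, not taken from this guide):

```shell
# Hypothetical full backup of the proxmox host as a plain Debian system.
# Virtual filesystems are pruned (-P); /var/lib/vz/dump is not excluded,
# so the VM/CT backups stored there end up inside the dar archive.
dar -c /mnt/Backup/soleil-full-2020-09-16/soleil-full-2020-09-16 \
    -R / -z \
    -P proc -P sys -P dev -P run -P mnt -P tmp
```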

    Creating partitions and Logical Volumes

    Compared to the previous restoration steps, what changes is that you will create only two partitions, the EFI partition and an LVM partition:

    [root@sysresccd ~]# gdisk /dev/sda
    GPT fdisk (gdisk) version 1.0.4
    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present
    Found valid GPT with protective MBR; using GPT.
    Command (? for help): n
    Partition number (1-128, default 1):
    First sector (34-209715166, default = 2048) or {+-}size{KMGTP}:
    Last sector (2048-209715166, default = 209715166) or {+-}size{KMGTP}: 4095
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300): ef00
    Changed type of partition to 'EFI System'
    Command (? for help): n
    Partition number (2-128, default 2):
    First sector (34-209715166, default = 4096) or {+-}size{KMGTP}:
    Last sector (4096-209715166, default = 209715166) or {+-}size{KMGTP}:
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300): 8e00
    Changed type of partition to 'Linux LVM'
    Command (? for help): w
    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!
    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sda.
    The operation has completed successfully.
    [root@sysresccd ~]# gdisk -l /dev/sda
    GPT fdisk (gdisk) version 1.0.4
    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present
    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sda: 209715200 sectors, 100.0 GiB
    Model: QEMU HARDDISK
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 209715166
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)
    Number  Start (sector)    End (sector)  Size        Code  Name
       1            2048            4095   1024.0 KiB  EF00  EFI System
       2            4096       209715166   100.0 GiB   8E00  Linux LVM
    [root@sysresccd ~]#

    Formatting the partitions and volumes

    The formatting of the EFI partition has been covered earlier, so we will not detail it here; but it must be done now, in order for the following steps to succeed.

    What remains to set up is the LVM side:

    • the Physical Volume
    • the Volume Group
    • the Logical Volumes (which correspond to the partitions we created in the non-LVM context)
    • formatting these volumes as we did for partitions
    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0   100G  0 disk
    |-sda1   8:1    0     1M  0 part
    `-sda2   8:2    0   100G  0 part
    sdb      8:16   0    32G  0 disk
    `-sdb1   8:17   0    32G  0 part
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]# pvcreate /dev/sda2
    Physical volume "/dev/sda2" successfully created.
    [root@sysresccd ~]# vgcreate soleil /dev/sda2
    Volume group "soleil" successfully created
    [root@sysresccd ~]# lvcreate -L 9G soleil -n rootfs
    Logical volume "rootfs" created.
    [root@sysresccd ~]# lvcreate -L 1G soleil -n swap
    Logical volume "swap" created.
    [root@sysresccd ~]# mkswap /dev/mapper/soleil-swap
    Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
    no label, UUID=8aa8e971-3aea-4357-8723-dbc9392bacf8
    [root@sysresccd ~]# swapon /dev/mapper/soleil-swap
    [root@sysresccd ~]# mkfs.ext4 /dev/mapper/soleil-rootfs
    mke2fs 1.45.0 (6-Mar-2019)
    Discarding device blocks: done
    Creating filesystem with 2359296 4k blocks and 589824 inodes
    Filesystem UUID: 65561197-1e85-498d-9127-bb8f4bc142ac
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (16384 blocks): done
    Writing superblocks and filesystem accounting information: done
    [root@sysresccd ~]#

    Now that all partitions are created as previously, we can mount them to get ready for dar restoration:

    [root@sysresccd ~]# cd /mnt
    [root@sysresccd /mnt]# mkdir R
    [root@sysresccd /mnt]# mount /dev/mapper/soleil-rootfs R
    [root@sysresccd /mnt]# cd R
    [root@sysresccd /mnt/R]# mkdir -p boot/efi
    [root@sysresccd /mnt/R]# mount /dev/sda1 boot/efi
    [root@sysresccd /mnt/R]# lsblk -i
    NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0               7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda                 8:0    0   100G  0 disk
    |-sda1              8:1    0     1M  0 part /mnt/R/boot/efi
    `-sda2              8:2    0   100G  0 part
      |-soleil-rootfs 254:0    0     9G  0 lvm  /mnt/R
      `-soleil-swap   254:1    0     1G  0 lvm  [SWAP]
    sdb                 8:16   0    32G  0 disk
    `-sdb1              8:17   0    32G  0 part
    sr0                11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd /mnt/R]#

    Restoring the data with dar

    By default in proxmox, /var/lib/vz is in the root filesystem. We could restore as described above, but it may also be interesting to do otherwise: create a thin-pool and use a thin volume inside it for /var/lib/vz, in order not to saturate the proxmox system with backups, while not dedicating a whole partition to it either, but sharing this space with the VM volumes.

    Creating a thin-pool

    Creating a thin pool is done in three steps.

    • create a small Logical Volume for metadata
    • create a large Logical Volume for data
    • convert both volumes into a thin-pool

    [root@sysresccd /mnt/R]# lvcreate -n metadata -L 300M soleil
    Logical volume "metadata" created.
    [root@sysresccd /mnt/R]# lvcreate -n pooldata -L 80G soleil
    Logical volume "pooldata" created.
    [root@sysresccd /mnt/R]# lvconvert --type thin-pool --poolmetadata soleil/metadata soleil/pooldata
    Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
    WARNING: Converting soleil/pooldata and soleil/metadata to thin pool's data and metadata volumes with metadata wiping.
    THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    Do you really want to convert soleil/pooldata and soleil/metadata? [y/n]: y
    Converted soleil/pooldata and soleil/metadata to thin pool.
    [root@sysresccd /mnt/R]# lsblk -i
    NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0                        7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda                          8:0    0   100G  0 disk
    |-sda1                       8:1    0     1M  0 part /mnt/R/boot/efi
    `-sda2                       8:2    0   100G  0 part
      |-soleil-rootfs          254:0    0     9G  0 lvm  /mnt/R
      |-soleil-swap            254:1    0     1G  0 lvm  [SWAP]
      |-soleil-pooldata_tmeta  254:2    0   300M  0 lvm
      | `-soleil-pooldata      254:4    0    80G  0 lvm
      `-soleil-pooldata_tdata  254:3    0    80G  0 lvm
        `-soleil-pooldata      254:4    0    80G  0 lvm
    sdb                          8:16   0    32G  0 disk
    `-sdb1                       8:17   0    32G  0 part
    sr0                         11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd /mnt/R]#

    Using the thin-pool for /var/lib/vz

    The thin-pool being created, we can now use it to create a thin Logical Volume, in other words a volume that consumes from the pool only the data space it really needs, sharing the free space with the other thin volumes of this thin-pool (see also the discard mount option and the fstrim command).

    [root@sysresccd /mnt/R]# lvcreate -n vz -V 20G --thinpool pooldata soleil
    Logical volume "vz" created.
    [root@sysresccd /mnt/R]# mkfs.ext4 /dev/mapper/soleil-vz
    mke2fs 1.45.0 (6-Mar-2019)
    Discarding device blocks: done
    Creating filesystem with 5242880 4k blocks and 1310720 inodes
    Filesystem UUID: a2284c87-a0c9-419f-ba19-19cb5df46d4a
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
    [root@sysresccd /mnt/R]# mkdir -p var/lib/vz
    [root@sysresccd /mnt/R]# mount /dev/mapper/soleil-vz var/lib/vz
    [root@sysresccd /mnt/R]# lsblk -i
    NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0                           7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda                             8:0    0   100G  0 disk
    |-sda1                          8:1    0     1M  0 part /mnt/R/boot/efi
    `-sda2                          8:2    0   100G  0 part
      |-soleil-rootfs             254:0    0     9G  0 lvm  /mnt/R
      |-soleil-swap               254:1    0     1G  0 lvm  [SWAP]
      |-soleil-pooldata_tmeta     254:2    0   300M  0 lvm
      | `-soleil-pooldata-tpool   254:4    0    80G  0 lvm
      |   |-soleil-pooldata       254:5    0    80G  0 lvm
      |   `-soleil-vz             254:6    0    20G  0 lvm  /mnt/R/var/lib/vz
      `-soleil-pooldata_tdata     254:3    0    80G  0 lvm
        `-soleil-pooldata-tpool   254:4    0    80G  0 lvm
          |-soleil-pooldata       254:5    0    80G  0 lvm
          `-soleil-vz             254:6    0    20G  0 lvm  /mnt/R/var/lib/vz
    sdb                             8:16   0    32G  0 disk
    `-sdb1                          8:17   0    32G  0 part
    sr0                            11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd /mnt/R]#
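    Note that space freed inside a thin volume returns to the pool only when the filesystem discards the freed blocks: either mount it with the discard option, or trim it periodically. A sketch of both, assuming the mount point of this example:

```shell
# One-shot: tell the filesystem to discard unused blocks back to the pool.
fstrim -v /var/lib/vz
# Or continuous: remount with the discard option (also usable in fstab).
mount -o remount,discard /var/lib/vz
```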

    Now we can restore with dar exactly as we did above without LVM. The VM backups will go into the thin volume, the rest of the proxmox system will be restored into its own logical volume, preserved from the disk activity of the VMs and their backups, and the content of the EFI partition will be restored as well.

    [root@sysresccd /mnt/R]# cd /mnt/Backup
    [root@sysresccd /mnt/Backup]# ./dar_static -x soleil-full-2020-09-16/soleil-full-2020-09-16 -R /mnt/R -X "lost+found" -w
    [...]

    Once dar has completed, you will have to adapt /mnt/R/etc/fstab: both for the UUIDs if they were used, and for /dev/sdX entries that may become /dev/mapper/&lt;vgname&gt;-&lt;volume-name&gt; when moving partitions to LVM volumes or changing VG and LV names. Here, as we split /var/lib/vz off to a dedicated thin volume, we also have to add a new line to fstab for this volume to be mounted at system startup:

    [root@sysresccd /mnt/R]# echo "/dev/mapper/soleil-vz /var/lib/vz ext4 defaults 0 2" >> /mnt/R/etc/fstab

    The end of the process is the same as above, by chrooting and reinstalling grub.

    Proxmox Specific

    As we did not save nor restore the block devices of the VMs (the thin-pool) but only restored their backups into /var/lib/vz/dump, we need to remove the VMs referenced in the proxmox database (their disks do not exist anymore) and restore them from their backups:

    root@soleil:~# for vm in `qm list | sed -rn 's/\s+([0-9]+).*/\1/p'` ; do qm set $vm --protect no ; qm destroy $vm ; done
    ...
    root@soleil:~# qm list
    root@soleil:~#

    Now, from the proxmox GUI, you can restore all the VMs and containers from their backups. If you are not using LVM but Ceph or another shared and distributed filesystem, this task vanishes, as the block storage of the VMs is still present in the distributed storage cluster. How to add the restored local storage back to such a Ceph cluster is out of the scope of this document.

    Restoring a LUKS ciphered disk

    When restoring with dar, you may take the opportunity to restore to a ciphered disk, even if the original system was not ciphered. You may also have backed up a ciphered system; either way we end up at the same point: restoring the system onto a ciphered disk.

    For simplicity we will restore an LVM inside a ciphered volume, but the exercise is pretty similar if you restore an LVM and have some Logical Volumes be LUKS ciphered devices. The advantage of LVM inside LUKS is simplicity; the advantage of LUKS inside LVM is performance, when you do not want all volumes ciphered (for example a /var/spool/proxy holding public data, the content of a public ftp server, and so on, is not worth ciphering).

    As seen previously, the EFI partition cannot be part of an LVM; it cannot be ciphered either, because to read a ciphered volume the kernel must already be loaded and running. A second consequence is that the kernel and the mandatory initramfs must not reside in a ciphered partition. LUKS can prevent your data from being exposed to a thief; however, if someone has physical access to your computer and it is not running 24/7, LUKS alone cannot prevent them from modifying the kernel and ramdisk image used to boot, introducing a keylogger or other spying tool that catches the secret key you enter at boot time to uncipher your LUKS disk. Detecting and preventing this type of attack is the role of the secure boot process, which we will not describe here today (maybe in a future revision of this document).

    So we have to create an EFI partition, an unciphered boot partition and a partition that will be ciphered and will contain the LVM (root, home and swap space for example). With the same commands we used above, here is the partitioning we should get:

    [root@sysresccd ~]# gdisk /dev/sda
    GPT fdisk (gdisk) version 1.0.4
    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present
    Found valid GPT with protective MBR; using GPT.
    Command (? for help): p
    Disk /dev/sda: 67108864 sectors, 32.0 GiB
    Model: QEMU HARDDISK
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): 23033D82-7166-4282-AEF9-F2CC18453F1C
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 67108830
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 4029 sectors (2.0 MiB)
    Number  Start (sector)    End (sector)  Size       Code  Name
       1            2048         1050623   512.0 MiB  EF00  EFI Partition
       2         1050624         1550335   244.0 MiB  8300  Linux Boot
       3         1550336        67106815   31.3 GiB   8300  LUKS Device
    Command (? for help): q
    [root@sysresccd ~]#

    We will format the EFI partition the same way we did above, and format the Linux boot partition with an ext4 filesystem as we also did above. What is new here is the LUKS device, which we first have to initialize as a LUKS volume. The volume contains some metadata (ciphered keys, tokens, ...) that has to be created first (and only once):

    [root@sysresccd ~]# cryptsetup luksFormat /dev/sda3
    WARNING!
    ========
    This will overwrite data on /dev/sda3 irrevocably.
    Are you sure? (Type uppercase yes): YES
    Enter passphrase for /dev/sda3:
    Verify passphrase:
    [root@sysresccd ~]#

    Of course, if you forget the provided passphrase, you will lose all data stored in that volume. Note that this key can be changed without having to rebuild or recipher the whole volume; we will see that further on. Now we can open the volume, which means making the Linux kernel aware of the master key so it can cipher/uncipher the data written to or read from this device:

    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0    32G  0 disk
    |-sda1   8:1    0   512M  0 part
    |-sda2   8:2    0   244M  0 part
    `-sda3   8:3    0  31.3G  0 part
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]# cryptsetup open /dev/sda3 crypted_part
    Enter passphrase for /dev/sda3:
    [root@sysresccd ~]# lsblk -i
    NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    loop0              7:0    0 788.8M  1 loop  /run/archiso/sfs/airootfs
    sda                8:0    0    32G  0 disk
    |-sda1             8:1    0   512M  0 part
    |-sda2             8:2    0   244M  0 part
    `-sda3             8:3    0  31.3G  0 part
      `-crypted_part 254:0    0    32G  0 crypt
    sr0               11:0    1   841M  0 rom   /run/archiso/bootmnt
    [root@sysresccd ~]#
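    As mentioned earlier, the passphrase can be changed later without rebuilding or reciphering the volume, since LUKS only re-wraps the volume's master key. A sketch:

```shell
# Replace the current passphrase by a new one (prompts for both):
cryptsetup luksChangeKey /dev/sda3
# Or add a second passphrase in another key slot, keeping the first valid:
cryptsetup luksAddKey /dev/sda3
```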

    The rest is straightforward: we now have a /dev/mapper/crypted_part we can use as a Physical Volume for LVM:

    [root@sysresccd ~]# pvcreate /dev/mapper/crypted_part
    Physical volume "/dev/mapper/crypted_part" successfully created.
    [root@sysresccd ~]# vgcreate vgname /dev/mapper/crypted_part
    Volume group "vgname" successfully created
    [root@sysresccd ~]# lvcreate -n root -L 10G vgname
    Logical volume "root" created.
    [root@sysresccd ~]# lvcreate -n home -L 8G vgname
    Logical volume "home" created.
    [root@sysresccd ~]# lvcreate -n swap -L 1G vgname
    Logical volume "swap" created.
    [root@sysresccd ~]# lvs
    LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
    home vgname -wi-a-----  8.00g
    root vgname -wi-a----- 10.00g
    swap vgname -wi-a-----  1.00g
    [root@sysresccd ~]# lsblk -i
    NAME               MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    loop0                7:0    0 788.8M  1 loop  /run/archiso/sfs/airootfs
    sda                  8:0    0    32G  0 disk
    |-sda1               8:1    0   512M  0 part
    |-sda2               8:2    0   244M  0 part
    `-sda3               8:3    0  31.3G  0 part
      `-crypted_part   254:0    0    32G  0 crypt
        |-vgname-root  254:1    0    10G  0 lvm
        |-vgname-home  254:2    0     8G  0 lvm
        `-vgname-swap  254:3    0     1G  0 lvm
    sr0                 11:0    1   841M  0 rom   /run/archiso/bootmnt
    [root@sysresccd ~]#

    The following steps are almost identical to what we did earlier:

    • format the Logical Volumes with the filesystem of your choice, and use mkswap for the swap volume
    • mount /dev/mapper/vgname-root to /mnt/R directory
    • mount /dev/sda2 to /mnt/R/boot
    • mount /dev/sda1 to /mnt/R/boot/efi
    • mount /dev/mapper/vgname-home to /mnt/R/home
    • restore the data with dar -R /mnt/R .... as we did above
    • edit /mnt/R/etc/fstab and /mnt/R/etc/initramfs-tools/conf.d/resume with the UUID of the filesystem we created (get them using blkid)
    • and edit or create the /mnt/R/etc/crypttab file, we will zoom on that now:

    In a Linux system, /etc/crypttab is read at startup (from the initramfs) to know which volumes should be "opened" (what cryptsetup open did above, manually). This leads the system to ask for the passphrase needed to access each ciphered volume.

    The /etc/crypttab is structured per line, each line containing 4 fields separated by spaces:

    • the name we will use for the unciphered volume (above we used crypted_part)
    • the UUID of the LUKS volume (here the UUID of /dev/sda3, retrievable using blkid)
    • the passphrase; for the root device we must use "none" so you get prompted for it at boot time
    • some flags, we will use "luks,discard" here (see crypttab man page for more).
    [root@sysresccd /mnt/R/etc]# echo "crypted_part UUID=4d76e357-f136-4f7e-addc-030436f37682 none luks,discard" > /mnt/R/etc/crypttab
    [root@sysresccd /mnt/R/etc]#

    The rest is exactly the same as before:

    • reinstall grub using update-grub and grub-install
    • regenerate the initramfs using update-initramfs -u

    Last, before rebooting you may want to close all that properly; there is a pitfall with LVM on LUKS you have to be aware of: to close the LUKS volume, the LVM inside it must be deactivated first, else, as the LUKS device is kept busy by LVM, you won't be able to close it:

    root@sysresccd:~# exit                                    # leaving the chroot environment
    exit
    [root@sysresccd /mnt/R]# umount /mnt/R/boot/efi
    [root@sysresccd /mnt/R]# umount /mnt/R/boot
    [root@sysresccd /mnt/R]# umount /mnt/R/home
    [root@sysresccd /mnt/R]# swapoff /dev/mapper/vgname-swap  # if we activated this swap volume
    [root@sysresccd /mnt/R]# umount /mnt/R/dev
    [root@sysresccd /mnt/R]# umount /mnt/R/proc
    [root@sysresccd /mnt/R]# umount /mnt/R/sys/firmware/efi/efivars
    [root@sysresccd /mnt/R]# umount /mnt/R/sys
    [root@sysresccd /mnt/R]# cd /
    [root@sysresccd /]# umount /mnt/R
    [root@sysresccd ~]# cryptsetup close crypted_part
    Device crypted_part is still in use.                      # LVM still uses the crypted_part volume
    [root@sysresccd ~]# vgchange -a n vgname
    0 logical volume(s) in volume group "vgname" now active
    [root@sysresccd ~]# cryptsetup close crypted_part
    [root@sysresccd ~]# lsblk -i
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
    sda      8:0    0    32G  0 disk
    |-sda1   8:1    0   512M  0 part
    |-sda2   8:2    0   244M  0 part
    `-sda3   8:3    0  31.3G  0 part
    sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
    [root@sysresccd ~]# shutdown -r now
    DAR's Documentation

    DAR's Limitations

    Here follows a description of the known limitations you should consult before creating a bug report for dar:

    Fixed Limits

    • The size of SLICES may be limited by the filesystem or kernel (for example, the maximum file size was 2 GB with Linux kernel 2.2.x); other limits may exist depending on the filesystem used.
    • the number of SLICES is only limited by the length of filenames: with a basename of 10 chars, and considering your filesystem supports at most 256 chars per filename, you could already get up to 10^241 SLICES (1 followed by 241 zeros). And as soon as your filesystem supports bigger files or longer filenames, dar will follow without change.
    • dar_manager can gather up to 65534 different backups, not more. This limit should be high enough to not be a problem.
    • when using a listing file to define which files to operate on (-[ and -] options of dar), each line of the listing file must not be longer than 20479 bytes, else a new line is considered to start at the 20480th byte.
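    For example, a listing file is just one path per line (each line staying under the 20479-byte limit); a hypothetical use with -[ could look like this (file names are illustrative):

```shell
# Build a listing file, one path (relative to the -R root) per line.
printf '%s\n' 'etc/passwd' 'etc/fstab' > /tmp/filelist
# Back up only the files named in the listing file.
dar -c /tmp/backup -R / -[ /tmp/filelist
```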

    System variable limits

    Memory

    Dar uses virtual memory (= RAM + swap) to be able to add the list of saved files at the end of each archive. Dar uses its own integer type (called "infinint") that has no size limit (unlike 32 bit or 64 bit integers). This already makes dar able to manage Zettabyte volumes and above, even if systems cannot yet manage such file sizes. Nevertheless, this has a memory and CPU overhead, added to the C++ overhead for the data structures. All together, dar needs on average 650 bytes of virtual memory per saved file with dar-2.1.0 and around 850 with dar-2.4.x (that's the price to pay for the new features). Thus, for example, if you have 110,000 files to save, whatever the total amount of data to save, dar will require around 67 MB of virtual memory.
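    The estimate is a simple multiplication; for instance with the dar-2.1.0 figure:

```shell
# 110,000 files at ~650 bytes of virtual memory each, expressed in MiB:
files=110000
bytes_per_file=650
echo "$(( files * bytes_per_file / 1048576 )) MiB"   # integer MiB, roughly the 67 MB quoted above
```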

    Now, when doing catalogue extraction or a differential backup, dar holds two catalogues in memory, thus the amount of memory needed doubles (134 MB in the example). Why? Because for a differential backup, dar starts with the catalogue of the archive of reference, which is needed to know which files to save and which to skip, while in parallel it builds the catalogue of the new archive all along the process. As for catalogue extraction, the process is equivalent to making a differential backup just after a full backup.

    As you can guess, merging two archives into a third one requires even more memory (memory to store the first archive to merge, the second archive to merge, and the resulting archive to produce).

    This memory issue is not a limit by itself, but you need enough virtual memory to be able to save your data (if necessary you can still add swap space, as a partition or as a plain file).

    Integers

    To overcome the memory issue explained above, dar can be built in another mode. In this mode, "infinint" is replaced by 32 bit or 64 bit integers, as selected by the --enable-mode=32 or --enable-mode=64 option given to the configure script. The executables built this way (dar, dar_xform, dar_slave and dar_manager) run faster and use much less memory than the "full" versions using "infinint". But yes, there are drawbacks: slice size, file size, dates, number of files to backup, total archive size (sum of all slices), etc., are bounded by the maximum value of the integer used, which is 4,294,967,296 for 32 bit and 18,446,744,073,709,551,616 for 64 bit integers. In clear, the 32 bit version cannot handle dates after year 2106 nor file sizes over 4 GB, while the 64 bit version cannot handle dates after around 500 billion years (longer than the estimated age of the Universe: 15 billion years) nor files larger than around 18 EB (18 exabytes).

    Since version 2.5.4 another parameter depends on the maximum supported integer: the number of entries in a given archive. In other words, you will not be able to have more than 4 giga files (about 4 billion files) in a single archive when using libdar32, 18 exa files with libdar64, and no limitation with libdar based on infinint.

    What is the behavior when such a limit is reached? For compatibility with the rest of the code, limited-length integers (32-bit or 64-bit for now) cannot be used as-is; they are enclosed in a C++ class which reports overflow in arithmetic operations. Archives generated with all the different versions of dar stay compatible with each other, but the 32-bit and 64-bit versions will not be able to read or produce all possible archives. In that case, the dar suite programs abort with an error message asking you to use the "full" version of the dar programs.

    Command line

    On several systems, command-line long options are not available. This is because dar relies on GNU getopt. Systems like FreeBSD do not provide GNU getopt by default, and the getopt function of their standard library supports neither long options nor optional arguments. On such systems you will have to use short options only, and to overcome the lack of optional arguments you need to set the argument explicitly. For example, in place of "-z" use "-z 9", and so on (see dar's man page section "EXPLICIT OPTIONAL ARGUMENTS"). All of dar's features are available with FreeBSD's getopt, just using short options and explicit arguments.

    Alternatively you can install GNU getopt as a separate library called libgnugetopt. If the include file <getopt.h> is also available, the configure script will detect it and use this library. This way you can have long options on FreeBSD, for example.

    Another point concerns the command-line length limitation. All systems (correct me if I am wrong) limit the size of the command line. If you want to pass more options to dar than your system can afford, you can use the -B option and put all dar's arguments (or just some of them) in the file given as argument to -B. -B can be used several times on the command line and is recursive (you can use -B inside a file read by a -B option).
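    The -B mechanism can be sketched as follows. The file name (common_opts.dcf) and the option set are just examples, not a dar convention; the dar invocation itself is shown as a comment:

```shell
# Put shared dar options in a file read with -B (hypothetical example)
cat > common_opts.dcf <<'EOF'
-z
-X "*~"
-X ".*~"
-P proc
-P sys
EOF
cat common_opts.dcf
# dar would then be invoked as (not run here):
# dar -c /mnt/usb/linux_full -R / -B common_opts.dcf
```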

    Dates

    Unix files have up to four dates:

    • last modification date (mtime)
    • last access date (atime)
    • last inode change (ctime)
    • creation date (birthtime) [not all Unix systems support this date; Linux does not, for example]

    In dar, these dates are stored as integers (the number of seconds elapsed since Jan 1st, 1970) as Unix systems do; since release 2.5.0 dar can also save, store and restore the microsecond part of these dates on systems that support it. As seen above, the limitation is not due to dar but to the integer type used, so if you use infinint, you should be able to store any date as far in the future as you want. Of course dar cannot store dates before Jan the 1st of 1970, but that should not be a very big problem as there should not be surviving files older than that epoch ;-)

    There is no standard way under Unix to set the ctime of a file, so dar is not able to restore the ctime of files.

    Symlinks

    On systems that provide the lutimes() system call, the dates of a symlink can be restored. On systems that do not provide that system call, if you modify the mtime of an existing symlink, you end up modifying the mtime of the file targeted by that symlink, leaving the mtime of the symlink itself untouched! For that reason, without lutimes() support, dar avoids restoring the mtime of symlinks.

    SFTP protocol

    dar/libdar cannot read an .ssh/known_hosts file containing "ecdsa-sha2-nistp256" lines: even if the host you are connecting to is known by an "ssh-rsa" key, as soon as one host in the known_hosts file is not "ssh-rsa", dar/libdar will fail the host validation and abort. This restriction comes from libssh, which seems to be used by libcurl, on which libdar relies for network related features. A workaround is described in the man page, based on the DAR_SFTP_KNOWNHOSTS_FILE environment variable.

    As of January 2021, libssh 1.9.0 seems to have fixed this issue, so the problem should be solved shortly after libcurl integrates this new version.

    Multi-threading performance and memory requirement

    Don't expect the execution time to be exactly divided by the number of threads you use, compared with the equivalent single-threaded execution. First, there is an overhead in having several threads working in the same process (protocols, synchronization on shared memory). Second, the nature of the work does not always lend itself to parallel processing.

    With release 2.7.0 the two most CPU-intensive tasks involved in building and using a dar backup (compression and encryption) have been enhanced to support multiple threads. Some lighter tasks remain (CRC calculation, escape sequence/tape marks, sparse file detection/reconstruction) that are difficult to parallelize; as they require much less CPU than the first two tasks, the multi-threading investment (processing, but also software development) is not worth it for them.

    Third, the ciphering/deciphering process may be interrupted when seeking in the archive for a particular file, which slightly reduces the performance of multi-threading. For compression, even when multi-threading is used, compression stays per file (to bring robustness), and is thus reset for each new file. The gain is very interesting for large files, and you had better not select too small a block size for compression: several tens of kilobytes or even a megabyte. Large compression blocks (which are dispatched among the different threads) are interesting because small files will be handled by a single thread with little overhead, while large files will leverage multiple threads. Last, your disk access may become the bottleneck that drives the total execution time of the operation; adding more threads will not provide any improvement in that condition.

    The drawback of the multi-threading used within libdar is the memory requirement. Each worker thread works with a block in memory (in fact two: one for compressed/ciphered data, the other for its decompressed/deciphered counterpart). In addition to the worker threads you specify, two helper threads are needed: one to dispatch the data blocks to the worker threads, the other to gather the resulting blocks. These two are not CPU-intensive but may hold memory blocks for treatment while the worker threads do as well.
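    A back-of-the-envelope estimate following the description above: each worker holds two blocks, and the two helper threads may hold a block in transit as well. The worker count and block size here are arbitrary examples, and the formula is an approximation, not a figure from libdar:

```shell
# Rough memory estimate: (workers * 2 blocks + 2 in-transit blocks) * block size
workers=4
block_kib=512
total_kib=$(( (workers * 2 + 2) * block_kib ))
echo "${total_kib} KiB"    # prints "5120 KiB"
```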

    DAR - Tutorial
    Dar Documentation

    Tutorial

    Introduction

    This tutorial shows you how to backup your file system (partially or totally) on USB keys (things work the same with hard disks or cloud storage, but we will keep USB keys for simplicity). Most important, we will also see how to restore your system from scratch in case of hard disk failure (or other cataclysms).

    Note:
    This document was initially written circa 2003, so don't pay attention to the old hardware it mentions; dar usage stays the same with modern removable media or cloud storage, and the document has been updated with recent features as if that old hardware were still current :-)

    In the following, for each feature we use, you will find a description of what it does followed by the way to activate it, both with its long option and its short option. Of course, it is up to you to use either the short or the long option (but not both at the same time for a particular feature). Short options begin with a single dash (-) and have a single letter to identify them, like -s. Long options begin with two dashes (--) and usually have a descriptive word to identify them: --slice.

    Short and long options may have no argument (-D), may have a mandatory argument, which is the word following the option (-s 1M), and some rare ones may have an optional argument, leading the option to either stand alone (-z) or be glued to its optional argument (-zlz4), which for long options is done by means of the equal sign (=): --compression=lz4

    The FULL backup

    We need first to make a full backup, let's go:

    • Let's assume the size of the usb keys is 100 MiB, we ask dar to split the backup in many files (also known as slices) of 100 MiB: --slice 100M or -s 100M.

    • Onto the first usb key we want to copy the dar binary outside the backup, to be able to restore it in case of hard disk failure, for example.

      IMPORTANT:
      The dar binary relies on several libraries which must also be available on the rescue system or copied with the dar binary. But if you don't want to worry about needed libraries, there is a static version of dar, whose only difference is that it has all the required libraries included in it (thus it is a larger binary). Its name is dar_static, and its main reason for existence is to be placed beside backups in case something goes wrong with your system. Note that dar_static is useless for Windows: you will always need the Cygwin dll.

      You can also add the man pages or a copy of this tutorial, if you are afraid of not remembering all the many features of dar ;-) or find the -h or --help option too sparse. Note that all the dar documentation is available on the web. OK, you need Internet access to read it.

      This makes the free space on the first usb key a bit smaller. I let you do the subtraction because this is subject to change from system to system, but let's assume dar_static is less than 5 MiB, thus the initial slice should not exceed 95 MiB: --first-slice 95M or -S 95M. (Note that '-s' is lowercase for all the slices, while '-S' is UPPERCASE, meaning the initial slice only).

    • We need to pause between slices to change the usb key when it is full: --pause or -p

    • As we don't want to stick in front of the screen during the backup, we ask dar to ring the terminal bell when user action is needed: --beep or -b

    • We will compress data inside the backup: --compression or -z.

      By default the -z option uses the gzip compression algorithm (bzip2, lzo, xz, lz4, zstd and some others are also available). Optionally, if speed is more important than archive size, you can degrade compression by specifying the compression level: -z1 for example for gzip, or -zxz:5 for compression level 5 with the xz algorithm. By default the maximum compression is used (-z is equivalent to -zgzip:9)

    • Now, we want to backup the whole file system. --fs-root / or -R /

      This tells dar that no files out of the provided directory tree will be saved. Here, it means that no files will be excluded from the backup (if no filter is specified, see below).

    • There are some files you probably don't want to save, like backup files generated by emacs "*~" and ".*~": --exclude "*~" --exclude ".*~" or -X "*~" -X ".*~"

      Note that you have to quote the mask so it is not interpreted by the shell. The -X options do not apply to directories nor to paths, they just apply to filenames. See also the opposite -I option (--include) in the man page for more information.

    • Among these files are several sub-trees you must not save: the /proc file system for example, as well as /dev and /sys. These are virtual filesystems; saving them would only make your backup bigger, filled with useless stuff: --prune dev --prune proc --prune sys or -P dev -P proc -P sys

      Note that the path must be relative to the -R option (thus no leading '/' must be used). Unlike the -X/-I options, the -P option applies to the full file path+name. If a directory matches a -P option, all its subdirectories will also be excluded. Note also that -P can receive wildcards, and they must be quoted so they are not interpreted by the shell: -P "home/*/.mozilla/cache" for example. Lastly, -P can also be used to exclude a plain file (if you don't want to exclude all files of a given name using the -X option): -P home/joe/.bashrc for example would only exclude joe's .bashrc file, not any other file, while -X .bashrc would exclude any file of that name including joe's. See also the -g, -[ and -] options in the man page for more, as well as the "file selection in brief" paragraph

    • More importantly we must not save the backup itself: --prune mnt/usb or -P mnt/usb

      assuming that your usb key is mounted under /mnt/usb. We could also have excluded all files with the "dar" extension, which are backups generated by dar, using -X "*.*.dar", but this would also have excluded other dar archives from the backup, which may not always fit your need.

    • Now, as we previously excluded the /dev/pts, /proc and /mnt/usb directories, we would have to create these directory mount-points by hand at recovery time to be able to mount the corresponding filesystems. But we can do better with the -D option: it changes dar's behavior by not totally ignoring excluded directories (whatever the feature used to exclude them) but rather storing them as empty directories in the backup: --empty-dir or -D

      Thus at recovery time excluded directories will be recreated automatically as empty directories

    • Last, we have to give a name to this full backup. Let's call it "linux_full" and as it is supposed to take place on the usb key, its path will be /mnt/usb/linux_full: --create /mnt/usb/linux_full or -c /mnt/usb/linux_full

      Note that linux_full is not a complete filename, it is a "basename", on which dar will add a number and the ".dar" extension, this way the first slice will be a file of name linux_full.1.dar located in /mnt/usb

    Now, as we will have to mount and umount the /mnt/usb file system, we must not have any process using it; in particular, dar's current directory must not be /mnt/usb, so we change to / for example.

    All together we follow this procedure for our example:

    • Plug an empty usb key and mount it according to your /etc/fstab file.

      mount /mnt/usb
    • Copy the dar binary to the first usb key (to be able to restore in case of a big problem, like a hard disk failure) and optionally the man pages and/or this tutorial.

      cp `which dar_static` /mnt/usb
    • then, type the following:

      cd /
      dar -c /mnt/usb/linux_full -s 100M -S 95M -p -b -z -R / -X "*~" -X ".*~" -P dev/pts -P sys -P proc -P mnt/usb -D

      Note that option order has no importance. Some options may be used several times (-X, -I, -P) some others cannot (see man page for more).

    • When the first slice is done, dar will pause, ring the terminal bell and display a message. You will have to unmount the usb key:

      umount /mnt/usb
    • and replace it by an empty new one and mount it:

      mount /mnt/usb

    To be able to do that, you can swap to another virtual console pressing ALT+F? keys (if under Linux), or open another xterm if under X-Windows, or suspend dar by typing CTRL-Z and reactivating it after mounting/unmounting by typing `fg' (without the quotes).

    Then proceed with dar for the next slice, pressing the <enter> key. Dar will label slices this way:

    • slice 1: linux_full.1.dar
    • slice 2: linux_full.2.dar
    • and so on.

    That's it! We have finished the first step: the backup. It may take a long time depending on the size of the data to backup. The following step (differential backup) however can be done often, and it will stay fast every time (OK, except if a big part of your system has changed; in that case you can consider making another full backup).

    Test your Backups!

    There are so many reasons a backup can turn out useless: human error, saturated disk, lack of permission, and so on. The best test is to restore the data at least once. But there are quicker (though less exhaustive) ways to test a backup:

    Check the backup content

    This one is usually quick: you know the backup is readable, but you have to verify that all expected files are present in the output:

    dar -l /mnt/usb/linux_full

    Testing the backup

    One step further, you can let dar try to restore everything without effectively restoring anything (this mimics the cat > /dev/null paradigm). Doing so you validate that the data and metadata of all files are not corrupted. This is usually a good thing to add to your backup script (or more generally your backup process):

    dar -t /mnt/usb/linux_full

    If using removable media of poor quality, it is recommended to first unmount and remount the removable disk, to flush the system cache. Else you may read data from the cache (in memory) and not detect an error on your disk. dar -t cannot check a single slice; it checks the whole archive. If you need to check a single slice (for example after burning it on DVD-RW), you can use the diff command. For example, you have burnt the last completed slices on DVD-RW, but have just enough free space to store one slice on disk. You can thus check the slice typing something like this:

    diff /mnt/cdrom/linux_full.132.dar /tmp/linux_full.132.dar

    You can also add the --hash option when you create the backup (for example --hash md5); it will produce for each slice a small hash file named after the slice name: "linux_full.1.dar.md5", "linux_full.2.dar.md5", etc. Then using the standard Unix command "md5sum" you can check the integrity of the slice:

    md5sum -c linux_full.1.dar.md5

    If all is OK for the slice on the target medium (diff does not complain or md5sum returns "OK"), you can continue and let dar proceed with the next slice.
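    The hash workflow can be tried end-to-end with a stand-in file (no dar invocation here; the slice name is just an example matching the tutorial's naming):

```shell
# Pretend a slice exists, produce its hash file, then verify it
echo "pretend this is a slice" > linux_full.1.dar
md5sum linux_full.1.dar > linux_full.1.dar.md5
md5sum -c linux_full.1.dar.md5    # prints "linux_full.1.dar: OK"
```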

    Compare the backup content with filesystem

    Instead of testing the whole archive you can also compare it with the just-saved system:

    dar -d /mnt/usb/linux_full -R /

    This will compare the archive with the filesystem tree located at / . Same remark as previously: it is recommended to first unmount and mount the removable media to flush the system cache.

    If you backup a live filesystem, you may prefer 'testing' an archive, as it will not issue errors about files that changed since the backup was made. But if you are archiving files, diffing is probably a better idea as you really compare the content of the files, and you should not experience file changes on data you are archiving: most of the time such data is old, steady data that is not likely to change.

    Differential backups

    The only thing to add is the base name of the backup we take as reference: --ref /mnt/usb/linux_full or -A /mnt/usb/linux_full

    Of course, we have to choose another name for that new backup, let's call it linux_diff1: --create /mnt/usb/linux_diff1 or -c /mnt/usb/linux_diff1

    Last point: if you want to put the new backup at the end of the full backup, you will have to change the -S option according to the remaining space on the last usb key. Suppose the last slice of linux_full takes 34 MB: you have 66 MB available for the first slice of the differential backup (and always 100 MB for the following ones): --first-slice 66M or -S 66M

    but if you want to put the backup on a new usb key, just forget the -S option.

    Here we also want to produce a hash file to test each slice's integrity before removing it from the hard disk (md5, sha1 and sha512 are the hash algorithms available today): --hash md5 or -3 md5

    All together we get:

    dar -c /mnt/usb/linux_diff1 -A /mnt/usb/linux_full -s 100M -S 66M -p -b -z -R / -X "*~" -X ".*~" -P dev/pts -P proc -P mnt/usb -P sys -D --hash md5

    The only new point is that, just before effectively starting the backup, dar will ask for the last slice of the archive of reference (linux_full); then dar will pause (thanks to the -p option) for you to change the disk if necessary and insert the one where you want to write the new backup's first slice, then pause again for you to change the disk for the second slice, and so on.

    Endless Differential Backups

    You can make another differential backup, taking linux_diff1 as reference (which is called an incremental backup, while a differential backup always has a full backup as reference). In this case you would change only the following: -c /mnt/usb/linux_diff2 -A /mnt/usb/linux_diff1

    You could also decide to change device, taking a 4.4 GiB DVD-RAM... or rather something more recent and bigger if you want; this would not cause any problem at all.

    After some time, when you have accumulated many incremental backups for a single full backup, you will have to make a new full backup, depending on the time you have available for doing it, or on your patience if one day you have to recover the whole data after a disk crash: you would then have to restore the full backup, then all the following incremental backups up to the most recent one. This requires more user intervention than restoring a single full backup; all is a matter of balance between the time it takes to backup and the time it takes to restore.

    Note that starting with release 1.2.0 a new command appeared that helps restoring a few files from a lot of differential backups. Its name is dar_manager. See the end of this tutorial and the associated man page for more.

    Another solution, when you have too many incremental backups, is to make the next backup a differential one taking the last full backup as reference, instead of the last differential backup done. This way, it will take less time than doing a full backup, and you will not have to restore all intermediate differential backups.

    For dar, there is no difference in structure between a differential backup (having a full backup as reference) and an incremental backup (having a differential or another incremental backup as reference). It is just the way you choose the backup of reference that leads to two different words for what dar considers the same kind of archive.

    Of course, a given backup can be used as reference for several differential backups; there is no limitation in number nor in nature (the reference can be a full or a differential backup).

    Yet another solution is to set up decremental backups: this lets you have the full backup as the most recent one and the older ones as differences from the backup done just after them... but nothing is perfect: doing so takes much more time than doing a full backup at each step, yet uses as little storage space as incremental backups, and restoration is as simple as restoring a full backup. Here too, all is a matter of choice, taste and use case.

    Recovering after a disk crash

    Sorry, it happened: your old disk has crashed. OK, you are happy because you now have a good argument to buy the very fast and very enormous very latest hard disk available. Usually, you also cry because you have lost data and will have to reinstall your whole system, which was working so well and for so long!

    If however the last backup you made is recent, then keep smiling! OK, you have installed your new hard disk and configured your BIOS for it (well, in ancient times it was necessary to manually set up the BIOS with the new disk; today you can forget it).

    1. You first need to boot your new computer with the empty disk in order to restore your data onto it. For that I would advise using Knoppix or, better, System Rescue CD, which let you boot from CD or USB key. You don't need to install anything on your brand-new disk, just make partitions and format them as you want (we will detail that below). You may even change the partition layout, add new partitions or merge several into a single one: what is important is that you set up each one with enough space to hold the data to be restored in it. We suppose your new disk is /dev/sda and /dev/sdb is your removable media drive (USB key, DVD device, ...). For clarity, in the following we will keep assuming it to be a set of USB keys; with CD, DVD or other disks you would do much the same.

    2. Create the partition table as you wish, using fdisk /dev/sda, or gdisk /dev/sda for a more versatile and modern partition table.

    3. Format the partition which will receive your data. dar is filesystem independent: you can use ext2 (as here in the example), ext3, ext4, ReiserFS, Minix, UFS, HFS Plus, XFS, whatever Unix-like filesystem you want, even if the backed up data did not reside on such a filesystem at backup time! mke2fs /dev/sda1

    4. Copy and record in a temporary file the UUID of the generated filesystem, if the /etc/fstab we will restore in the next steps relies on that instead of fixed paths (like /dev/sda1 or /dev/mapper/...). You can also retrieve the UUID by calling blkid

    5. Additionally, if you have created one, format the swap partition and also record the generated UUID if necessary: mkswap -c /dev/sda2

    6. If you have a lot of files to restore, you can activate the swap on the partition of your new hard drive: swapon /dev/sda2

    7. Now we must mount the hard disk

      cd /
      mkdir /disk
      mount -t ext2 /dev/sda1 /disk
    8. As an alternative, if you want to restore your system over several partitions like /usr, /var, /home and /, you must create the partitions, format them as seen above, then create the directories that will be used as mount points and mount the partitions on these directories. For example if you have /, /usr, /var and /home partitions this would look like this:

      mkdir /disk/usr /disk/var /disk/home
      mount /dev/sda2 /disk/usr
      mount /dev/sda3 /disk/var
      mount /dev/sda4 /disk/home
    9. If the boot system used does not already include dar/libdar (unlike System Rescue CD and Knoppix for example) we need to copy the dar binary from a removable medium to your disk: insert the USB key containing the dar_static binary, to be able to freely change keys later on:

      cd /
      mkdir /usb_key
      mount /dev/sdb /usb_key
      cp /usb_key/dar_static /disk

      where /dev/sdb points to your usb key drive (run "dmesg" just after plugging in the key to know which device to use in place of the fancy /dev/sdb). We will remove dar_static from your new hard drive at the end of the restoration.

    10. All the restored data has to go in /disk subdirectory: -R /disk

    11. The process may be long, thus it might be useful to be notified when a user action is required by dar: -b. Note that the -p option is not required here, because if a slice is missing dar will pause and ask for its number (if slice "0" is requested by dar, it means the "last" slice of the backup is requested).

    12. OK, now we have seen all the options, let's go restoring!

      /disk/dar_static -x /usb_key/linux_full -R /disk -b
    13. ...and when the next USB key is needed:

      umount /usb_key

      ...then unplug the key, plug the next one and mount it:

      mount /dev/sdb /usb_key

      As previously, to do that either use a second xterm or virtual console, or suspend dar with CTRL-Z and wake it up with the 'fg' command. Then press <enter> to proceed with dar

    14. Once finished with the restoration of linux_full, we have to do the same with any following differential/incremental backup. However, doing so will warn you any time dar restores a more recent file (file overwriting) or any time a file that has been removed since the backup of reference has to be removed from the file system (suppression). If you don't want to press the <enter> key several thousand times, use the -w option (don't warn). All files will be overwritten without warning, and this is not an issue as we restore more recent data over older data.

    15. All together, for each differential backup, we have to call:

      /disk/dar_static -x /usb_key/linux_diff1 -R /disk -b -w
      /disk/dar_static -x /usb_key/linux_diff2 -R /disk -b -w
      /disk/dar_static -x /usb_key/linux...... -R /disk -b -w
    16. Finally, remove the dar binary from the disk:

      rm /disk/dar_static
    17. and we have to modify /etc/fstab with the new UUIDs you have recorded (use the blkid command to get them listed and modify /etc/fstab if necessary)

    18. Last, reinstall your original boot loader from the restored data:

      If you still use lilo type: lilo -r /disk

      If your boot loader is grub/grub2 type:

      update-initramfs -u
      update-grub
      grub-install /dev/sda
    19. You can reboot your machine and be happy with your brand-new hard disk with your old precious data on it:

      shutdown -r now
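    The per-backup restore commands of step 15 can be sketched as a loop. The backup names are examples, and the dar invocation is kept as a comment (it would require the mounted usb key and the restored disk):

```shell
# Restore each differential/incremental backup in chronological order
for base in linux_diff1 linux_diff2 linux_diff3; do
    echo "would restore: $base"
    # /disk/dar_static -x /usb_key/$base -R /disk -b -w
done
```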

    In this operation dar notably restored sparse files and hard-linked inodes, so you will have no drawback and possibly even better space usage than on the original filesystem, as dar can transparently convert big plain files into smaller sparse files without any impact

    The Flexibly Restoring a whole system with dar document goes one step further in this direction by illustrating many use cases, like the use of LVM, LUKS encrypted filesystems, and even the full restoration of a Proxmox Virtual Environment system with all its Virtual Machines

    Recover only some files

    Gosh, you have removed an important file by mistake. Thus, you just need to restore it, not the rest of the full and differential backups.

    First method:

    We could, as previously, try each archive starting from the full backup up to the most recent differential backup, and restore the file if it is present in the archive:

    dar -R / -x /usb/linux_full -g home/denis/my_precious_file

    This would restore only the file /home/denis/my_precious_file from the full backup.

    OK, now we would also have to restore it from all the differential backups, the same way we did. Of course, this file may have changed since the full backup.

    dar -R / -x /usb/linux_diff1 -g home/denis/my_precious_file

    and so on, up to the last differential archive.

    dar -R / -x /usb/linux_diff29 -g home/denis/my_precious_file

    Second method (more efficient):

    We will restore our lost file, starting from the most recent differential backup and *maybe* going back up to the full backup. Our file may or may not be present in a given differential archive, as it may have changed or not since the previous version, thus we have to check whether our file got restored, using the -v option (verbose):

    dar -R / -x /usb/linux_diff29 -v -g home/denis/my_precious_file

    If we can see a line like this:

    restoring file: /home/denis/my_precious_file

    Then we are good. We can stop here, because we got the most recent backed-up version of our lost file. Otherwise we have to continue with the previous differential backup, up to the full backup if necessary. This method has an advantage over the first one: it does not *in all cases* need to use all the backups done since the full backup.

    OK, now you have two files to restore. No problem, just follow the second method but add the -r option so as not to overwrite any more recent file already restored in a previous step:

    dar -x /usb/linux_diff29 -R / -r -v -g home/denis/my_precious_file -g etc/fstab

    Check the output to see if one or both of your files got restored. If not, continue with the previous backup, up to the time you have seen, for each file, a line indicating it has been restored. Note that the most recent version of each file may not be located in the same archive: you might get /etc/fstab restored from linux_diff28, and /home/denis/my_precious_file restored from linux_diff27. In case /etc/fstab is also present in linux_diff27, it would not have been overwritten by an older version, thanks to the -r option.

    This option is very important when restoring more than one file using the second method. If instead the first method is used (restoring first from the full backup, then from all the following differential backups), the -r option is not so important, because if overwriting occurs when you restore lost files, you would only overwrite an older version with a newer one.

    Third method (for the lazy guys like me)

    If you are lazy (as I am), have a look at dar_manager (at the end of this tutorial): it relies on a database that compiles the contents of all your backups. You can then ask dar_manager for a particular file, several files or even directories; it will figure out which backups to fetch them from and will invoke dar for you on the correct backup and file set.

    Isolating a "catalogue"

    We have seen previously how to do differential backups. Doing so, dar asks for the last slice of the archive of reference. This operation is required to read the table of contents (also known as the "catalogue" [this is a French word that means "catalog" in English; I will keep the French spelling in the following because it is also the name of the C++ class used in libdar]) which is located at the end of the archive (thus on the last slice(s)). You have the possibility to isolate (that is, to extract) a copy of this table of contents into a small file. This small file is essentially the same as a differential archive that holds no data. Let's take an example with the full backup we did previously to see how to extract a catalogue:

    dar -C /root/CAT_linux_full -A /mnt/usb/linux_full -z

    Note here that we used the UPPERCASE 'C' letter, as opposed to the lowercase 'c' used for archive creation: we just created an isolated catalogue, which is usually a small archive. In addition, you can use the -z option to have it compressed, the -s and -S options to have it split into slices, as well as the -p and -b options; but for an isolated catalogue this is rarely necessary, as it is usually rather small. The only thing we have seen for backup that you will not be able to do for isolation is filtering files (the -X, -I, -g, -P, -[ and -] options are not available for that operation).
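    For instance, combining some of these options, a compressed catalogue split into slices could be isolated as follows; the 10 MiB slice size is purely illustrative, not something the tutorial prescribes:

```shell
# Isolate the catalogue of the full backup, compressed (-z) and
# split into 10 MiB slices (-s); the slice size is only an example.
dar -C /root/CAT_linux_full -A /mnt/usb/linux_full -z -s 10M
```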

    So, now that we have our extracted catalogue, what can we do with it? Two things:

    First

    we can use the extracted catalogue in place of the archive, as reference for a differential backup. No need to manipulate the old usb key: you can store the last backup's isolated catalogue on your hard disk instead and use it as reference for the next backup. If we had used an isolated catalogue in the previous examples, we would have built our first differential backup this way (note that here we have chosen the CAT_ prefix to indicate that the archive is an isolated catalogue, but you are free to label isolated catalogues any way you want):

    dar -c linux_diff1 -A /root/CAT_linux_full ... (other options seen above stay the same)
    Second

    we can use the isolated catalogue as a backup of the internal catalogue, in case the latter gets corrupted. To face data corruption, the best solution ever invented is Parchive, an autonomous program that builds parity files (the same mechanism as used for RAID disks) for a given file. Here we could use Parchive to create a parity file for each slice. So, assuming you lack Parchive, and that you failed to read the full backup because the usb key is corrupted in the area storing the internal catalogue, you can use an isolated catalogue as rescue:
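    As an aside, here is what protecting a slice with Parchive could look like, using the par2 command-line tool (the common Parchive implementation). The 5% redundancy figure and the slice name are assumptions for illustration:

```shell
# Create parity data (about 5% redundancy) for the first slice
par2 create -r5 /mnt/usb/linux_full.1.dar
# Later, check the slice against its parity files...
par2 verify /mnt/usb/linux_full.1.dar.par2
# ...and repair it if corruption was detected
par2 repair /mnt/usb/linux_full.1.dar.par2
```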

    dar -x linux_full -A /root/CAT_linux_full ...
    dar -d linux_full -A /root/CAT_linux_full ...
    dar -t linux_full -A /root/CAT_linux_full ...
    dar -l /root/CAT_linux_full

    An isolated catalogue can be built for any type of archive (full, differential or incremental, even for an already isolated catalogue, which I admit is rather useless). You can also create an isolated catalogue at the same time you make a backup, thanks to the -@ option:

    dar -c linux_diff1 -A /mnt/usb/linux_full -@ CAT_linux_diff1 ... (other options...)
    dar -c linux_full -@ CAT_linux_full ... (other options seen above stay the same for backup)

    This is known as "on-fly" isolation.

    Dar_manager tutorial

    dar_manager builds a database of all your archives' contents, to automatically restore the latest version of a given set of files. dar_manager is not targeted at restoring a whole filesystem: the best way to restore a whole filesystem has been described above and does not rely on dar_manager. So let's use dar_manager to restore a set of files or a whole directory. First, we have to create a "database" file:

    dar_manager -C my_base.dmd

    This creates a file "my_base.dmd", where "dmd" stands for Dar Manager Database, but you are free to use any other extension.

    This database is created empty. Each time you make a backup, be it full or differential, you will have to add its table of contents (aka "catalogue") to this database using the following command:

    dar_manager -B my_base.dmd -A /mnt/usb/linux_full

    This will add ("A" stands for "add") the archive's contents to the base. In some cases you may not have the archive available but only its extracted catalogue. Of course, you can use the extracted catalogue in place of the archive!

    dar_manager -B my_base.dmd -A ~/Catalogues/CAT_linux_full

    The problem, however, is that when dar_manager needs to recover a file located in this archive, it will try to open the archive ~/Catalogues/CAT_linux_full for restoration, which does not contain any data because it is just the catalogue of the archive.

    No problem in that case: thanks to the -b option we can change the basename of the archive afterward, and thanks to the -p option we can change its path at any time. Let's now list the database contents:

    dar_manager -B my_base.dmd -l

    It shows the following:

    dar path    :
    dar options :
    archive #   |          path          |    basename
    ------------+------------------------+---------------
         1        /home/denis/Catalogues   CAT_linux_full

    We should change the path of archive number 1 so that dar_manager looks on the usb key:

    dar_manager -B my_base.dmd -p 1 /mnt/usb

    ...and also replace the name of the extracted catalogue with the real archive name:

    dar_manager -B my_base.dmd -b 1 linux_full

    Now we have exactly the same database as if we had used the real archive instead of its catalogue:

    dar_manager -B my_base.dmd -l

    dar path    :
    dar options :
    archive #   |          path          |    basename
    ------------+------------------------+---------------
         1        /mnt/usb                 linux_full

    Instead of using the -b and -p options, you can also give the path and name of the real archive to use at restoration time when you add the catalogue to the database:

    dar_manager -B my_base.dmd -A ~/Catalogues/CAT_linux_full /mnt/usb/linux_full

    This is done by adding an optional argument. The first (~/Catalogues/...) is the archive the catalogue is read from, and the second (/mnt/usb/...) is the name recorded for it. No access is made to this second archive at the time of the addition, so it may be unavailable when the command is typed.

    You can add up to 65534 archives to a given database, and have as many databases as you want.

    Note that we have not yet recorded in the database some important options to be passed to dar. For example, you will likely restore from the root of your filesystem, so when called from dar_manager, dar must receive the "-R /" option. This is done with:

    dar_manager -B my_base.dmd -o -R /

    Everything that follows -o is passed to dar as-is. You can see the options passed to dar when listing the database contents (-l option).

    Let's now suppose that after each backup you took the time to update your database, and that you have just removed an important file by mistake.

    We can restore our /home/denis/my/precious/file using dar_manager that way:

    dar_manager -B my_base.dmd -r home/denis/my/precious/file

    dar_manager will find the proper archive to use and call dar with the following options: dar -x archive -R / -g home/denis/my/precious/file, which in turn will ask you for the corresponding slices. If you want to restore several files at a time, or even a directory tree, you can put several arguments after dar_manager's -r option:

    dar_manager -B my_base.dmd -r home/denis/my/precious/file etc/fstab home/joe

    Once an archive becomes obsolete, you can delete it from the database thanks to the -D option. You can also change the archive order (-m option), list the archives in which a given file is located (-f option), get the list of most recent files in a given archive (-u option), and get overall statistics per archive (-s option). Lastly, you can specify which dar command to use by giving its path (-d option); by default, dar_manager uses the PATH shell variable to find the dar command.
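    A few of these maintenance operations, sketched on the database built in this tutorial (the archive number and file path are illustrative):

```shell
dar_manager -B my_base.dmd -s            # overall statistics per archive
dar_manager -B my_base.dmd -f etc/fstab  # archives containing a given file
dar_manager -B my_base.dmd -u 1          # most recent files in archive 1
dar_manager -B my_base.dmd -D 1          # delete obsolete archive 1
```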

    A new feature for those who are really very lazy (as I still am myself): dar_manager has an interactive mode, so you don't have to remember all these command-line switches except one:

    dar_manager -B my_base.dmd -i

    The interactive mode lets you do all operations except restoration, which can be done as previously explained.

    To go further with dar/libdar

    Well, we have reached the end of this tutorial, but dar/libdar still has a lot of features to discover:

    • strong encryption
    • archive merging
    • decremental backup
    • dar command-line files (DCF)
    • user commands between slices (and DUC files)
    • Extended Attribute manipulations
    • hard links
    • Sparse files
    • remote backup over ssh
    • suspending/resuming a database from dar before/after backing it up
    • using regex in place of glob expressions in masks
    • using dar with tape thanks to the sequential reading mode
    • having dar add padded zeros to slice numbers
    • excluding some files from compression
    • asking dar to retry saving a file if it changes at the time of the backup
    • what a "dirty" file is in a dar archive
    • listing an archive's contents in XML format
    • using conditional syntax in DCF files
    • using user targets
    • adding user comments in dar archive
    • using DAR_DCF_PATH and DAR_DUC_PATH environment variables
    • truncated archive repairing

    All this is described in much detail in the following documents:

    You can also find documentation starting from the feature point of view using the feature description page. However, if you find something unclear, feel free to report it or ask for help on the dar-support mailing-list. dar-2.7.17/doc/Benchmark_tools/0000755000175000017520000000000014767510033013246 500000000000000dar-2.7.17/doc/Benchmark_tools/bitflip0000755000175000017520000000071414403564520014544 00000000000000#!/bin/bash if [ -z "$1" -o -z "$2" ] ; then echo "usage: $0 " echo "flip the bit of the file located at the provided offset" exit 1 fi offbit=$1 file="$2" offbyte=$(( $offbit / 8 )) bitinbyte=$(( $offbit - ($offbyte * 8) )) readbyte=`xxd -s $offbyte -p -l 1 "$file"` mask=$(( 1 << $bitinbyte )) newbyte=$(( 0x$readbyte ^ $mask )) hexanewbyte=`printf "%.2x" $newbyte` echo $hexanewbyte | xxd -p -l 1 -s $offbyte -r - "$file" dar-2.7.17/doc/Benchmark_tools/build_test_tree.bash0000755000175000017520000000240414403564520017202 00000000000000#!/bin/bash if [ -z "$1" ] ; then echo "usage: $0 " exit 1 fi if [ -e "$1" ] ; then echo "$1 already exists, remove it or use another directory name" exit 1 fi if !
dar -V > /dev/null ; then echo "need dar to copy unix socket to the test tree" exit 1 fi mkdir "$1" cd "$1" # creating mkdir "SUB" dd if=/dev/zero of=plain_zeroed bs=1024 count=1024 dd if=/dev/urandom of=random bs=1024 count=1024 dd if=/dev/zero of=sparse_file bs=1 count=1 seek=10239999 ln -s random SUB/symlink-broken ln -s ../random SUB/symlink-valid mkfifo pipe mknod null c 3 1 mknod fd1 b 2 1 dar -c - -R / -g dev/log -N -Q -q | dar -x - --sequential-read -N -q -Q ln sparse_file SUB/hard_linked_sparse_file ln dev/log SUB/hard_linked_socket ln pipe SUB/hard_linked_pipe # modifying dates and permissions sleep 2 chown nobody random chown -h bin SUB/symlink-valid chgrp -h daemon SUB/symlink-valid sleep 2 echo hello >> random sleep 2 cat < random > /dev/null # adding Extend Attributes, assuming the filesystem as user_xattr and acl option set setfacl -m u:nobody:rwx plain_zeroed && setfattr -n "user.hello" -v "hello world!!!" plain_zeroed || (echo "FAILED TO CREATE EXTENDED ATTRIBUTES" && exit 1) # adding filesystem specific attributes chattr +dis plain_zeroed dar-2.7.17/doc/Benchmark_tools/historization_feature0000755000175000017520000000111514403564520017530 00000000000000#!/bin/bash if [ -z "$1" -o -z "$2" ] ; then echo "usage: $0

    {phase1 | phase2}" exit 1 fi dir="$1" phase="$2" case "$phase" in phase1) if [ -e "$dir" ] ; then echo "$dir exists, remove it first" exit 2 fi mkdir "$dir" echo "Hello World!" > "$dir/A.txt" echo "Bonjour tout le monde !" > "$dir/B.txt" ;; phase2) if [ ! -d "$dir" ] ; then echo "$dir does not exist or is not a directory, run phase1 first" exit 2 fi rm -f "$dir/A.txt" echo "Buongiorno a tutti !" > "$dir/C.txt" ;; *) echo "unknown phase" exit 2 ;; esac dar-2.7.17/doc/Benchmark_tools/README0000644000175000017520000000011214403564520014035 00000000000000this directory contains scripts used for setting up the ../benchmark.html dar-2.7.17/doc/Benchmark_tools/always_change0000755000175000017520000000017014403564520015714 00000000000000#!/bin/bash if [ -z "$1" ] ; then echo "usage: $0 " exit 1 fi while /bin/true ; do touch "$1" ; done dar-2.7.17/doc/Benchmark_tools/hide_change0000755000175000017520000000063014403564520015326 00000000000000#!/bin/bash if [ -z "$1" ] ; then echo "usage: $0 []" echo "modify one bit and hide the change" exit 1 fi atime=`stat "$1" | sed -rn -s 's/^Access:\s+(.*)\+.*/\1/p'` mtime=`stat "$1" | sed -rn -s 's/^Modify:\s+(.*)\+.*/\1/p'` bitoffset="$2" if [ -z "$bitoffset" ] ; then bitoffset=2 fi ./bitflip "$bitoffset" "$1" touch -d "$mtime" "$1" touch -a -d "$atime" "$1" dar-2.7.17/doc/Doxyfile0000644000175000017520000030777614740171676011615 00000000000000# Doxyfile 1.8.8 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project. # # All text after a double hash (##) is considered a comment and is placed in # front of the TAG it is preceding. # # All text after a single hash (#) is considered a comment and will be ignored. # The format is: # TAG = value [value, ...] # For lists, items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (\" \"). 
#--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # This tag specifies the encoding used for all characters in the config file # that follow. The default is UTF-8 which is also the encoding used for all text # before the first occurrence of this tag. Doxygen uses libiconv (or the iconv # built into libc) for the transcoding. See http://www.gnu.org/software/libiconv # for the list of possible encodings. # The default value is: UTF-8. DOXYFILE_ENCODING = UTF-8 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by # double-quotes, unless you are using Doxywizard) that should identify the # project for which the documentation is generated. This name is used in the # title of most generated pages and in a few other places. # The default value is: My Project. PROJECT_NAME = "Disk ARchive" # The PROJECT_NUMBER tag can be used to enter a project or revision number. This # could be handy for archiving the generated documentation or if some version # control system is used. PROJECT_NUMBER = "##VERSION##" # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer a # quick idea about the purpose of the project. Keep the description short. PROJECT_BRIEF = "Full featured and portable backup and archiving tool" # With the PROJECT_LOGO tag one can specify an logo or icon that is included in # the documentation. The maximum height of the logo should not exceed 55 pixels # and the maximum width should not exceed 200 pixels. Doxygen will copy the logo # to the output directory. PROJECT_LOGO = ##SRCDIR##/doc/dar_doc.jpg # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path # into which the generated documentation will be written. 
If a relative path is # entered, it will be relative to the location where doxygen was started. If # left blank the current directory will be used. OUTPUT_DIRECTORY = ##BUILDDIR##/doc # If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub- # directories (in 2 levels) under the output directory of each output format and # will distribute the generated files over these directories. Enabling this # option can be useful when feeding doxygen a huge amount of source files, where # putting all generated files in the same directory would otherwise causes # performance problems for the file system. # The default value is: NO. CREATE_SUBDIRS = NO # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII # characters to appear in the names of generated files. If set to NO, non-ASCII # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode # U+3044. # The default value is: NO. ALLOW_UNICODE_NAMES = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, # Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), # Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, # Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), # Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, # Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, # Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, # Ukrainian and Vietnamese. # The default value is: English. OUTPUT_LANGUAGE = English # If the BRIEF_MEMBER_DESC tag is set to YES doxygen will include brief member # descriptions after the members that are listed in the file and class # documentation (similar to Javadoc). 
Set to NO to disable this. # The default value is: YES. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES doxygen will prepend the brief # description of a member or function before the detailed description # # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. # The default value is: YES. REPEAT_BRIEF = YES # This tag implements a quasi-intelligent brief description abbreviator that is # used to form the text in various listings. Each string in this list, if found # as the leading text of the brief description, will be stripped from the text # and the result, after processing the whole list, is used as the annotated # text. Otherwise, the brief description is used as-is. If left blank, the # following values are used ($name is automatically replaced with the name of # the entity):The $name class, The $name widget, The $name file, is, provides, # specifies, contains, represents, a, an and the. ABBREVIATE_BRIEF = "The $name class" \ "The $name widget" \ "The $name file" \ is \ provides \ specifies \ contains \ represents \ a \ an \ the # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # doxygen will generate a detailed section even if there is only a brief # description. # The default value is: NO. ALWAYS_DETAILED_SEC = NO # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all # inherited members of a class in the documentation of that class as if those # members were ordinary class members. Constructors, destructors and assignment # operators of the base classes will not be shown. # The default value is: NO. INLINE_INHERITED_MEMB = NO # If the FULL_PATH_NAMES tag is set to YES doxygen will prepend the full path # before files name in the file list and in the header files. If set to NO the # shortest path that makes the file name unique will be used # The default value is: YES. 
FULL_PATH_NAMES = NO # The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. # Stripping is only done if one of the specified strings matches the left-hand # part of the path. The tag can be used to show relative paths in the file list. # If left blank the directory from which doxygen is run is used as the path to # strip. # # Note that you can specify absolute paths here, but also relative paths, which # will be relative from the directory where doxygen is started. # This tag requires that the tag FULL_PATH_NAMES is set to YES. STRIP_FROM_PATH = # The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the # path mentioned in the documentation of a class, which tells the reader which # header file to include in order to use a class. If left blank only the name of # the header file containing the class definition is used. Otherwise one should # specify the list of include paths that are normally passed to the compiler # using the -I flag. STRIP_FROM_INC_PATH = # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but # less readable) file names. This can be useful is your file systems doesn't # support long names like on DOS, Mac, or CD-ROM. # The default value is: NO. SHORT_NAMES = NO # If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the # first line (until the first dot) of a Javadoc-style comment as the brief # description. If set to NO, the Javadoc-style will behave just like regular Qt- # style comments (thus requiring an explicit @brief command for a brief # description.) # The default value is: NO. JAVADOC_AUTOBRIEF = NO # If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first # line (until the first dot) of a Qt-style comment as the brief description. If # set to NO, the Qt-style will behave just like regular Qt-style comments (thus # requiring an explicit \brief command for a brief description.) # The default value is: NO. 
QT_AUTOBRIEF = NO # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a # multi-line C++ special comment block (i.e. a block of //! or /// comments) as # a brief description. This used to be the default behavior. The new default is # to treat a multi-line C++ comment block as a detailed description. Set this # tag to YES if you prefer the old behavior instead. # # Note that setting this tag to YES also means that rational rose comments are # not recognized any more. # The default value is: NO. MULTILINE_CPP_IS_BRIEF = NO # If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the # documentation from any documented member that it re-implements. # The default value is: YES. INHERIT_DOCS = YES # If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce a # new page for each member. If set to NO, the documentation of a member will be # part of the file/class/namespace that contains it. # The default value is: NO. SEPARATE_MEMBER_PAGES = NO # The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen # uses this value to replace tabs by spaces in code fragments. # Minimum value: 1, maximum value: 16, default value: 4. TAB_SIZE = 4 # This tag can be used to specify a number of aliases that act as commands in # the documentation. An alias has the form: # name=value # For example adding # "sideeffect=@par Side Effects:\n" # will allow you to put the command \sideeffect (or @sideeffect) in the # documentation, which will result in a user-defined paragraph with heading # "Side Effects:". You can put \n's in the value part of an alias to insert # newlines. ALIASES = # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources # only. Doxygen will then generate output that is more tailored for C. For # instance, some of the names that are used will be different. The list of all # members will be omitted, etc. # The default value is: NO. 
OPTIMIZE_OUTPUT_FOR_C = NO # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or # Python sources only. Doxygen will then generate output that is more tailored # for that language. For instance, namespaces will be presented as packages, # qualified scopes will look different, etc. # The default value is: NO. OPTIMIZE_OUTPUT_JAVA = NO # Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran # sources. Doxygen will then generate output that is tailored for Fortran. # The default value is: NO. OPTIMIZE_FOR_FORTRAN = NO # Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL # sources. Doxygen will then generate output that is tailored for VHDL. # The default value is: NO. OPTIMIZE_OUTPUT_VHDL = NO # Doxygen selects the parser to use depending on the extension of the files it # parses. With this tag you can assign which parser to use for a given # extension. Doxygen has a built-in mapping, but you can override or extend it # using this tag. The format is ext=language, where ext is a file extension, and # language is one of the parsers supported by doxygen: IDL, Java, Javascript, # C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran: # FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran: # Fortran. In the later case the parser tries to guess whether the code is fixed # or free formatted code, this is the default for Fortran type files), VHDL. For # instance to make doxygen treat .inc files as Fortran files (default is PHP), # and .f files as C (default is Fortran), use: inc=Fortran f=C. # # Note For files without extension you can use no_extension as a placeholder. # # Note that for custom extensions you also need to set FILE_PATTERNS otherwise # the files are not read by doxygen. EXTENSION_MAPPING = # If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments # according to the Markdown format, which allows for more readable # documentation. 
See http://daringfireball.net/projects/markdown/ for details. # The output of markdown processing is further processed by doxygen, so you can # mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in # case of backward compatibilities issues. # The default value is: YES. MARKDOWN_SUPPORT = YES # When enabled doxygen tries to link words that correspond to documented # classes, or namespaces to their corresponding documentation. Such a link can # be prevented in individual cases by by putting a % sign in front of the word # or globally by setting AUTOLINK_SUPPORT to NO. # The default value is: YES. AUTOLINK_SUPPORT = YES # If you use STL classes (i.e. std::string, std::vector, etc.) but do not want # to include (a tag file for) the STL sources as input, then you should set this # tag to YES in order to let doxygen match functions declarations and # definitions whose arguments contain STL classes (e.g. func(std::string); # versus func(std::string) {}). This also make the inheritance and collaboration # diagrams that involve STL classes more complete and accurate. # The default value is: NO. BUILTIN_STL_SUPPORT = YES # If you use Microsoft's C++/CLI language, you should set this option to YES to # enable parsing support. # The default value is: NO. CPP_CLI_SUPPORT = NO # Set the SIP_SUPPORT tag to YES if your project consists of sip (see: # http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen # will parse them like normal C++ but will assume all classes use public instead # of private inheritance when no explicit protection keyword is present. # The default value is: NO. SIP_SUPPORT = NO # For Microsoft's IDL there are propget and propput attributes to indicate # getter and setter methods for a property. Setting this option to YES will make # doxygen to replace the get and set methods by a property in the documentation. # This will only work if the methods are indeed getting or setting a simple # type. 
If this is not the case, or you want to show the methods anyway, you # should set this option to NO. # The default value is: YES. IDL_PROPERTY_SUPPORT = YES # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC # tag is set to YES, then doxygen will reuse the documentation of the first # member in the group (if any) for the other members of the group. By default # all members of a group must be documented explicitly. # The default value is: NO. DISTRIBUTE_GROUP_DOC = NO # Set the SUBGROUPING tag to YES to allow class member groups of the same type # (for instance a group of public functions) to be put as a subgroup of that # type (e.g. under the Public Functions section). Set it to NO to prevent # subgrouping. Alternatively, this can be done per class using the # \nosubgrouping command. # The default value is: YES. SUBGROUPING = YES # When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions # are shown inside the group in which they are included (e.g. using \ingroup) # instead of on a separate page (for HTML and Man pages) or section (for LaTeX # and RTF). # # Note that this feature does not work in combination with # SEPARATE_MEMBER_PAGES. # The default value is: NO. INLINE_GROUPED_CLASSES = NO # When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions # with only public data fields or simple typedef fields will be shown inline in # the documentation of the scope in which they are defined (i.e. file, # namespace, or group documentation), provided this scope is documented. If set # to NO, structs, classes, and unions are shown on a separate page (for HTML and # Man pages) or section (for LaTeX and RTF). # The default value is: NO. INLINE_SIMPLE_STRUCTS = NO # When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or # enum is documented as struct, union, or enum with the name of the typedef. 
So # typedef struct TypeS {} TypeT, will appear in the documentation as a struct # with name TypeT. When disabled the typedef will appear as a member of a file, # namespace, or class. And the struct will be named TypeS. This can typically be # useful for C code in case the coding convention dictates that all compound # types are typedef'ed and only the typedef is referenced, never the tag name. # The default value is: NO. TYPEDEF_HIDES_STRUCT = NO # The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This # cache is used to resolve symbols given their name and scope. Since this can be # an expensive process and often the same symbol appears multiple times in the # code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small # doxygen will become slower. If the cache is too large, memory is wasted. The # cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range # is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 # symbols. At the end of a run doxygen will report the cache usage and suggest # the optimal cache size from a speed point of view. # Minimum value: 0, maximum value: 9, default value: 0. LOOKUP_CACHE_SIZE = 0 #--------------------------------------------------------------------------- # Build related configuration options #--------------------------------------------------------------------------- # If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in # documentation are documented, even if no documentation was available. Private # class members and static file members will be hidden unless the # EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. # Note: This will also disable the warnings about undocumented members that are # normally produced when WARNINGS is set to YES. # The default value is: NO. EXTRACT_ALL = NO # If the EXTRACT_PRIVATE tag is set to YES all private members of a class will # be included in the documentation. 
# The default value is: NO. EXTRACT_PRIVATE = YES # If the EXTRACT_PACKAGE tag is set to YES all members with package or internal # scope will be included in the documentation. # The default value is: NO. EXTRACT_PACKAGE = NO # If the EXTRACT_STATIC tag is set to YES all static members of a file will be # included in the documentation. # The default value is: NO. EXTRACT_STATIC = NO # If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) defined # locally in source files will be included in the documentation. If set to NO # only classes defined in header files are included. Does not have any effect # for Java sources. # The default value is: YES. EXTRACT_LOCAL_CLASSES = YES # This flag is only useful for Objective-C code. When set to YES local methods, # which are defined in the implementation section but not in the interface are # included in the documentation. If set to NO only methods in the interface are # included. # The default value is: NO. EXTRACT_LOCAL_METHODS = NO # If this flag is set to YES, the members of anonymous namespaces will be # extracted and appear in the documentation as a namespace called # 'anonymous_namespace{file}', where file will be replaced with the base name of # the file that contains the anonymous namespace. By default anonymous namespace # are hidden. # The default value is: NO. EXTRACT_ANON_NSPACES = NO # If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all # undocumented members inside documented classes or files. If set to NO these # members will be included in the various overviews, but no documentation # section is generated. This option has no effect if EXTRACT_ALL is enabled. # The default value is: NO. HIDE_UNDOC_MEMBERS = NO # If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all # undocumented classes that are normally visible in the class hierarchy. If set # to NO these classes will be included in the various overviews. This option has # no effect if EXTRACT_ALL is enabled. 
# The default value is: NO. HIDE_UNDOC_CLASSES = YES # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend # (class|struct|union) declarations. If set to NO these declarations will be # included in the documentation. # The default value is: NO. HIDE_FRIEND_COMPOUNDS = NO # If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any # documentation blocks found inside the body of a function. If set to NO these # blocks will be appended to the function's detailed documentation block. # The default value is: NO. HIDE_IN_BODY_DOCS = NO # The INTERNAL_DOCS tag determines if documentation that is typed after a # \internal command is included. If the tag is set to NO then the documentation # will be excluded. Set it to YES to include the internal documentation. # The default value is: NO. INTERNAL_DOCS = YES # If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file # names in lower-case letters. If set to YES upper-case letters are also # allowed. This is useful if you have classes or files whose names only differ # in case and if your file system supports case sensitive file names. Windows # and Mac users are advised to set this option to NO. # The default value is: system dependent. CASE_SENSE_NAMES = YES # If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with # their full class and namespace scopes in the documentation. If set to YES the # scope will be hidden. # The default value is: NO. HIDE_SCOPE_NAMES = NO # If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of # the files that are included by a file in the documentation of that file. # The default value is: YES. SHOW_INCLUDE_FILES = YES # If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each # grouped member an include statement to the documentation, telling the reader # which file to include in order to use the member. # The default value is: NO. 
SHOW_GROUPED_MEMB_INC = NO # If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include # files with double quotes in the documentation rather than with sharp brackets. # The default value is: NO. FORCE_LOCAL_INCLUDES = NO # If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the # documentation for inline members. # The default value is: YES. INLINE_INFO = YES # If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the # (detailed) documentation of file and class members alphabetically by member # name. If set to NO the members will appear in declaration order. # The default value is: YES. SORT_MEMBER_DOCS = YES # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief # descriptions of file, namespace and class members alphabetically by member # name. If set to NO the members will appear in declaration order. Note that # this will also influence the order of the classes in the class list. # The default value is: NO. SORT_BRIEF_DOCS = NO # If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the # (brief and detailed) documentation of class members so that constructors and # destructors are listed first. If set to NO the constructors will appear in the # respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. # Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief # member documentation. # Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting # detailed member documentation. # The default value is: NO. SORT_MEMBERS_CTORS_1ST = NO # If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy # of group names into alphabetical order. If set to NO the group names will # appear in their defined order. # The default value is: NO. SORT_GROUP_NAMES = NO # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by # fully-qualified names, including namespaces. 
If set to NO, the class list will # be sorted only by class name, not including the namespace part. # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. # Note: This option applies only to the class list, not to the alphabetical # list. # The default value is: NO. SORT_BY_SCOPE_NAME = NO # If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper # type resolution of all parameters of a function it will reject a match between # the prototype and the implementation of a member function even if there is # only one candidate or it is obvious which candidate to choose by doing a # simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still # accept a match between prototype and implementation in such cases. # The default value is: NO. STRICT_PROTO_MATCHING = NO # The GENERATE_TODOLIST tag can be used to enable ( YES) or disable ( NO) the # todo list. This list is created by putting \todo commands in the # documentation. # The default value is: YES. GENERATE_TODOLIST = YES # The GENERATE_TESTLIST tag can be used to enable ( YES) or disable ( NO) the # test list. This list is created by putting \test commands in the # documentation. # The default value is: YES. GENERATE_TESTLIST = YES # The GENERATE_BUGLIST tag can be used to enable ( YES) or disable ( NO) the bug # list. This list is created by putting \bug commands in the documentation. # The default value is: YES. GENERATE_BUGLIST = YES # The GENERATE_DEPRECATEDLIST tag can be used to enable ( YES) or disable ( NO) # the deprecated list. This list is created by putting \deprecated commands in # the documentation. # The default value is: YES. GENERATE_DEPRECATEDLIST= YES # The ENABLED_SECTIONS tag can be used to enable conditional documentation # sections, marked by \if ... \endif and \cond # ... \endcond blocks. 
ENABLED_SECTIONS = # The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the # initial value of a variable or macro / define can have for it to appear in the # documentation. If the initializer consists of more lines than specified here # it will be hidden. Use a value of 0 to hide initializers completely. The # appearance of the value of individual variables and macros / defines can be # controlled using \showinitializer or \hideinitializer command in the # documentation regardless of this setting. # Minimum value: 0, maximum value: 10000, default value: 30. MAX_INITIALIZER_LINES = 30 # Set the SHOW_USED_FILES tag to NO to disable the list of files generated at # the bottom of the documentation of classes and structs. If set to YES the list # will mention the files that were used to generate the documentation. # The default value is: YES. SHOW_USED_FILES = YES # Set the SHOW_FILES tag to NO to disable the generation of the Files page. This # will remove the Files entry from the Quick Index and from the Folder Tree View # (if specified). # The default value is: YES. SHOW_FILES = YES # Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces # page. This will remove the Namespaces entry from the Quick Index and from the # Folder Tree View (if specified). # The default value is: YES. SHOW_NAMESPACES = YES # The FILE_VERSION_FILTER tag can be used to specify a program or script that # doxygen should invoke to get the current version for each file (typically from # the version control system). Doxygen will invoke the program by executing (via # popen()) the command <command> <input-file>, where <command> is the value of the # FILE_VERSION_FILTER tag, and <input-file> is the name of an input file provided # by doxygen. Whatever the program writes to standard output is used as the file # version. For an example see the documentation.
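# As an illustrative sketch (not part of the stock Doxyfile), a
# FILE_VERSION_FILTER program can be a small shell script that prints a version
# string for the file doxygen passes as its argument. The function name and the
# RCS-style $Id$ keyword convention below are assumptions, not part of dar's
# actual setup:

```shell
# Sketch of a FILE_VERSION_FILTER helper. Doxygen runs the configured program
# as: <program> <input-file>, and uses whatever it writes to stdout as the
# file's version string.
file_version() {
    # Extract the revision number from an RCS-style keyword line such as
    # '$Id: foo.c,v 1.42 2024/01/01 12:00:00 denis Exp $', or print
    # "unknown" when the file carries no such keyword.
    v=$(sed -n 's/.*\$Id: [^ ]* \([0-9][0-9.]*\) .*/\1/p' "$1" | head -n 1)
    echo "${v:-unknown}"
}
```

# A Doxyfile would then point FILE_VERSION_FILTER at a script wrapping this
# function (or at any other program following the same calling convention).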
FILE_VERSION_FILTER = # The LAYOUT_FILE tag can be used to specify a layout file which will be parsed # by doxygen. The layout file controls the global structure of the generated # output files in an output format independent way. To create the layout file # that represents doxygen's defaults, run doxygen with the -l option. You can # optionally specify a file name after the option, if omitted DoxygenLayout.xml # will be used as the name of the layout file. # # Note that if you run doxygen from a directory containing a file called # DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE # tag is left empty. LAYOUT_FILE = # The CITE_BIB_FILES tag can be used to specify one or more bib files containing # the reference definitions. This must be a list of .bib files. The .bib # extension is automatically appended if omitted. This requires the bibtex tool # to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info. # For LaTeX the style of the bibliography can be controlled using # LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the # search path. See also \cite for info how to create references. CITE_BIB_FILES = #--------------------------------------------------------------------------- # Configuration options related to warning and progress messages #--------------------------------------------------------------------------- # The QUIET tag can be used to turn on/off the messages that are generated to # standard output by doxygen. If QUIET is set to YES this implies that the # messages are off. # The default value is: NO. QUIET = NO # The WARNINGS tag can be used to turn on/off the warning messages that are # generated to standard error ( stderr) by doxygen. If WARNINGS is set to YES # this implies that the warnings are on. # # Tip: Turn warnings on while writing the documentation. # The default value is: YES. 
WARNINGS = YES # If the WARN_IF_UNDOCUMENTED tag is set to YES, then doxygen will generate # warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag # will automatically be disabled. # The default value is: YES. WARN_IF_UNDOCUMENTED = NO # If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for # potential errors in the documentation, such as not documenting some parameters # in a documented function, or documenting parameters that don't exist or using # markup commands wrongly. # The default value is: YES. WARN_IF_DOC_ERROR = YES # This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that # are documented, but have no documentation for their parameters or return # value. If set to NO doxygen will only warn about wrong or incomplete parameter # documentation, but not about the absence of documentation. # The default value is: NO. WARN_NO_PARAMDOC = NO # The WARN_FORMAT tag determines the format of the warning messages that doxygen # can produce. The string should contain the $file, $line, and $text tags, which # will be replaced by the file and line number from which the warning originated # and the warning text. Optionally the format may contain $version, which will # be replaced by the version of the file (if it could be obtained via # FILE_VERSION_FILTER) # The default value is: $file:$line: $text. WARN_FORMAT = "$file:$line: $text" # The WARN_LOGFILE tag can be used to specify a file to which warning and error # messages should be written. If left blank the output is written to standard # error (stderr). WARN_LOGFILE = #--------------------------------------------------------------------------- # Configuration options related to the input files #--------------------------------------------------------------------------- # The INPUT tag is used to specify the files and/or directories that contain # documented source files. 
You may enter file names like myfile.cpp or # directories like /usr/src/myproject. Separate the files or directories with # spaces. # Note: If this tag is empty the current directory is searched. INPUT = # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses # libiconv (or the iconv built into libc) for the transcoding. See the libiconv # documentation (see: http://www.gnu.org/software/libiconv) for the list of # possible encodings. # The default value is: UTF-8. INPUT_ENCODING = UTF-8 # If the value of the INPUT tag contains directories, you can use the # FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and # *.h) to filter out the source-files in the directories. If left blank the # following patterns are tested: *.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii, # *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp, # *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown, # *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf, # *.qsf, *.as and *.js. FILE_PATTERNS = *.h \ *.hpp # The RECURSIVE tag can be used to specify whether or not subdirectories should # be searched for input files as well. # The default value is: NO. RECURSIVE = YES # The EXCLUDE tag can be used to specify files and/or directories that should be # excluded from the INPUT source files. This way you can easily exclude a # subdirectory from a directory tree whose root is specified with the INPUT tag. # # Note that relative paths are relative to the directory from which doxygen is # run. EXCLUDE = doc \ intl \ m4 \ man \ misc \ po \ src/testing \ src/check \ config.h \ gettext.h \ my_config.h # The EXCLUDE_SYMLINKS tag can be used to select whether or not files or # directories that are symbolic links (a Unix file system feature) are excluded # from the input. # The default value is: NO.
EXCLUDE_SYMLINKS = NO # If the value of the INPUT tag contains directories, you can use the # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude # certain files from those directories. # # Note that the wildcards are matched against the file with absolute path, so to # exclude all test directories for example use the pattern */test/* EXCLUDE_PATTERNS = # The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names # (namespaces, classes, functions, etc.) that should be excluded from the # output. The symbol name can be a fully qualified name, a word, or if the # wildcard * is used, a substring. Examples: ANamespace, AClass, # AClass::ANamespace, ANamespace::*Test # # Note that the wildcards are matched against the file with absolute path, so to # exclude all test directories use the pattern */test/* EXCLUDE_SYMBOLS = # The EXAMPLE_PATH tag can be used to specify one or more files or directories # that contain example code fragments that are included (see the \include # command). EXAMPLE_PATH = # If the value of the EXAMPLE_PATH tag contains directories, you can use the # EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and # *.h) to filter out the source-files in the directories. If left blank all # files are included. EXAMPLE_PATTERNS = * # If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be # searched for input files to be used with the \include or \dontinclude commands # irrespective of the value of the RECURSIVE tag. # The default value is: NO. EXAMPLE_RECURSIVE = NO # The IMAGE_PATH tag can be used to specify one or more files or directories # that contain images that are to be included in the documentation (see the # \image command). IMAGE_PATH = doc/dar_s_doc.jpg # The INPUT_FILTER tag can be used to specify a program that doxygen should # invoke to filter for each input file. 
Doxygen will invoke the filter program # by executing (via popen()) the command: # # <filter> <input-file> # # where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the # name of an input file. Doxygen will then use the output that the filter # program writes to standard output. If FILTER_PATTERNS is specified, this tag # will be ignored. # # Note that the filter must not add or remove lines; it is applied before the # code is scanned, but not when the output code is generated. If lines are added # or removed, the anchors will not be placed correctly. INPUT_FILTER = # The FILTER_PATTERNS tag can be used to specify filters on a per file pattern # basis. Doxygen will compare the file name with each pattern and apply the # filter if there is a match. The filters are a list of the form: pattern=filter # (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how # filters are used. If the FILTER_PATTERNS tag is empty or if none of the # patterns match the file name, INPUT_FILTER is applied. FILTER_PATTERNS = # If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using # INPUT_FILTER ) will also be used to filter the input files that are used for # producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES). # The default value is: NO. FILTER_SOURCE_FILES = NO # The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file # pattern. A pattern will override the setting for FILTER_PATTERN (if any) and # it is also possible to disable source filtering for a specific pattern using # *.ext= (so without naming a filter). # This tag requires that the tag FILTER_SOURCE_FILES is set to YES. FILTER_SOURCE_PATTERNS = # If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that # is part of the input, its contents will be placed on the main page # (index.html). This can be useful if you have a project on for instance GitHub # and want to reuse the introduction page also for the doxygen output.
USE_MDFILE_AS_MAINPAGE = #--------------------------------------------------------------------------- # Configuration options related to source browsing #--------------------------------------------------------------------------- # If the SOURCE_BROWSER tag is set to YES then a list of source files will be # generated. Documented entities will be cross-referenced with these sources. # # Note: To get rid of all source code in the generated output, make sure that # also VERBATIM_HEADERS is set to NO. # The default value is: NO. SOURCE_BROWSER = YES # Setting the INLINE_SOURCES tag to YES will include the body of functions, # classes and enums directly into the documentation. # The default value is: NO. INLINE_SOURCES = NO # Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any # special comment blocks from generated source code fragments. Normal C, C++ and # Fortran comments will always remain visible. # The default value is: YES. STRIP_CODE_COMMENTS = YES # If the REFERENCED_BY_RELATION tag is set to YES then for each documented # function all documented functions referencing it will be listed. # The default value is: NO. REFERENCED_BY_RELATION = YES # If the REFERENCES_RELATION tag is set to YES then for each documented function # all documented entities called/used by that function will be listed. # The default value is: NO. REFERENCES_RELATION = YES # If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set # to YES, then the hyperlinks from functions in REFERENCES_RELATION and # REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will # link to the documentation. # The default value is: YES. REFERENCES_LINK_SOURCE = YES # If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the # source code will show a tooltip with additional information such as prototype, # brief description and links to the definition and documentation. 
Since this # will make the HTML file larger and loading of large files a bit slower, you # can opt to disable this feature. # The default value is: YES. # This tag requires that the tag SOURCE_BROWSER is set to YES. SOURCE_TOOLTIPS = YES # If the USE_HTAGS tag is set to YES then the references to source code will # point to the HTML generated by the htags(1) tool instead of doxygen built-in # source browser. The htags tool is part of GNU's global source tagging system # (see http://www.gnu.org/software/global/global.html). You will need version # 4.8.6 or higher. # # To use it do the following: # - Install the latest version of global # - Enable SOURCE_BROWSER and USE_HTAGS in the config file # - Make sure the INPUT points to the root of the source tree # - Run doxygen as normal # # Doxygen will invoke htags (and that will in turn invoke gtags), so these # tools must be available from the command line (i.e. in the search path). # # The result: instead of the source browser generated by doxygen, the links to # source code will now point to the output of htags. # The default value is: NO. # This tag requires that the tag SOURCE_BROWSER is set to YES. USE_HTAGS = NO # If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a # verbatim copy of the header file for each class for which an include is # specified. Set to NO to disable this. # See also: Section \class. # The default value is: YES. VERBATIM_HEADERS = NO # If the CLANG_ASSISTED_PARSING tag is set to YES, then doxygen will use the # clang parser (see: http://clang.llvm.org/) for more accurate parsing at the # cost of reduced performance. This can be particularly helpful with template # rich C++ code for which doxygen's built-in parser lacks the necessary type # information. # Note: The availability of this option depends on whether or not doxygen was # compiled with the --with-libclang option. # The default value is: NO.
CLANG_ASSISTED_PARSING = NO # If clang assisted parsing is enabled you can provide the compiler with command # line options that you would normally use when invoking the compiler. Note that # the include paths will already be set by doxygen for the files and directories # specified with INPUT and INCLUDE_PATH. # This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES. CLANG_OPTIONS = #--------------------------------------------------------------------------- # Configuration options related to the alphabetical class index #--------------------------------------------------------------------------- # If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all # compounds will be generated. Enable this if the project contains a lot of # classes, structs, unions or interfaces. # The default value is: YES. ALPHABETICAL_INDEX = YES # In case all classes in a project start with a common prefix, all classes will # be put under the same header in the alphabetical index. The IGNORE_PREFIX tag # can be used to specify a prefix (or a list of prefixes) that should be ignored # while generating the index headers. # This tag requires that the tag ALPHABETICAL_INDEX is set to YES. IGNORE_PREFIX = #--------------------------------------------------------------------------- # Configuration options related to the HTML output #--------------------------------------------------------------------------- # If the GENERATE_HTML tag is set to YES doxygen will generate HTML output # The default value is: YES. GENERATE_HTML = YES # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a # relative path is entered the value of OUTPUT_DIRECTORY will be put in front of # it. # The default directory is: html. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_OUTPUT = html # The HTML_FILE_EXTENSION tag can be used to specify the file extension for each # generated HTML page (for example: .htm, .php, .asp). 
# The default value is: .html. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_FILE_EXTENSION = .html # The HTML_HEADER tag can be used to specify a user-defined HTML header file for # each generated HTML page. If the tag is left blank doxygen will generate a # standard header. # # To get valid HTML, the header file must include any scripts and style sheets # that doxygen needs, which depend on the configuration options used (e.g. # the setting GENERATE_TREEVIEW). It is highly recommended to start with a # default header using # doxygen -w html new_header.html new_footer.html new_stylesheet.css # YourConfigFile # and then modify the file new_header.html. See also section "Doxygen usage" # for information on how to generate the default header that doxygen normally # uses. # Note: The header is subject to change so you typically have to regenerate the # default header when upgrading to a newer version of doxygen. For a description # of the possible markers and block names see the documentation. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_HEADER = # The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each # generated HTML page. If the tag is left blank doxygen will generate a standard # footer. See HTML_HEADER for more information on how to generate a default # footer and what special commands can be used inside the footer. See also # section "Doxygen usage" for information on how to generate the default footer # that doxygen normally uses. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_FOOTER = # The HTML_STYLESHEET tag can be used to specify a user-defined cascading style # sheet that is used by each HTML page. It can be used to fine-tune the look of # the HTML output. If left blank doxygen will generate a default style sheet. # See also section "Doxygen usage" for information on how to generate the style # sheet that doxygen normally uses.
# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as # it is more robust and this tag (HTML_STYLESHEET) will in the future become # obsolete. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_STYLESHEET = # The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined # cascading style sheets that are included after the standard style sheets # created by doxygen. Using this option one can overrule certain style aspects. # This is preferred over using HTML_STYLESHEET since it does not replace the # standard style sheet and is therefore more robust against future updates. # Doxygen will copy the style sheet files to the output directory. # Note: The order of the extra stylesheet files is of importance (e.g. the last # stylesheet in the list overrules the setting of the previous ones in the # list). For an example see the documentation. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_EXTRA_STYLESHEET = # The HTML_EXTRA_FILES tag can be used to specify one or more extra images or # other source files which should be copied to the HTML output directory. Note # that these files will be copied to the base HTML output directory. Use the # $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these # files. In the HTML_STYLESHEET file, use the file name only. Also note that the # files will be copied as-is; there are no commands or markers available. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_EXTRA_FILES = # The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen # will adjust the colors in the stylesheet and background images according to # this color. Hue is specified as an angle on a colorwheel, see # http://en.wikipedia.org/wiki/Hue for more information. For instance the value # 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300 # purple, and 360 is red again.
# Minimum value: 0, maximum value: 359, default value: 220. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_COLORSTYLE_HUE = 220 # The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors # in the HTML output. For a value of 0 the output will use grayscales only. A # value of 255 will produce the most vivid colors. # Minimum value: 0, maximum value: 255, default value: 100. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_COLORSTYLE_SAT = 100 # The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the # luminance component of the colors in the HTML output. Values below 100 # gradually make the output lighter, whereas values above 100 make the output # darker. The value divided by 100 is the actual gamma applied, so 80 represents # a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not # change the gamma. # Minimum value: 40, maximum value: 240, default value: 80. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_COLORSTYLE_GAMMA = 80 # If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML # page will contain the date and time when the page was generated. Setting this # to NO can help when comparing the output of multiple runs. # The default value is: YES. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_TIMESTAMP = YES # If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML # documentation will contain sections that can be hidden and shown after the # page has loaded. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_DYNAMIC_SECTIONS = YES # With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries # shown in the various tree structured indices initially; the user can expand # and collapse entries dynamically later on.
Doxygen will expand the tree to # such a level that at most the specified number of entries are visible (unless # a fully collapsed tree already exceeds this amount). So setting the number of # entries 1 will produce a full collapsed tree by default. 0 is a special value # representing an infinite number of entries and will result in a full expanded # tree by default. # Minimum value: 0, maximum value: 9999, default value: 100. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_INDEX_NUM_ENTRIES = 100 # If the GENERATE_DOCSET tag is set to YES, additional index files will be # generated that can be used as input for Apple's Xcode 3 integrated development # environment (see: http://developer.apple.com/tools/xcode/), introduced with # OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a # Makefile in the HTML output directory. Running make will produce the docset in # that directory and running make install will install the docset in # ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at # startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html # for more information. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_DOCSET = NO # This tag determines the name of the docset feed. A documentation feed provides # an umbrella under which multiple documentation sets from a single provider # (such as a company or product suite) can be grouped. # The default value is: Doxygen generated docs. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_FEEDNAME = "Doxygen generated docs" # This tag specifies a string that should uniquely identify the documentation # set bundle. This should be a reverse domain-name style string, e.g. # com.mycompany.MyDocSet. Doxygen will append .docset to the name. # The default value is: org.doxygen.Project. # This tag requires that the tag GENERATE_DOCSET is set to YES. 
DOCSET_BUNDLE_ID = org.doxygen.Project # The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify # the documentation publisher. This should be a reverse domain-name style # string, e.g. com.mycompany.MyDocSet.documentation. # The default value is: org.doxygen.Publisher. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_PUBLISHER_ID = org.doxygen.Publisher # The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher. # The default value is: Publisher. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_PUBLISHER_NAME = Publisher # If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three # additional HTML index files: index.hhp, index.hhc, and index.hhk. The # index.hhp is a project file that can be read by Microsoft's HTML Help Workshop # (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on # Windows. # # The HTML Help Workshop contains a compiler that can convert all HTML output # generated by doxygen into a single compiled HTML file (.chm). Compiled HTML # files are now used as the Windows 98 help format, and will replace the old # Windows help format (.hlp) on all Windows platforms in the future. Compressed # HTML files also contain an index, a table of contents, and you can search for # words in the documentation. The HTML workshop also contains a viewer for # compressed HTML files. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_HTMLHELP = NO # The CHM_FILE tag can be used to specify the file name of the resulting .chm # file. You can add a path in front of the file if the result should not be # written to the html output directory. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. CHM_FILE = # The HHC_LOCATION tag can be used to specify the location (absolute path # including file name) of the HTML help compiler ( hhc.exe). 
# If non-empty, doxygen will try to run the HTML help compiler on the
# generated index.hhp. The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

HHC_LOCATION           =

# The GENERATE_CHI flag controls whether a separate .chi index file is
# generated (YES) or whether it should be included in the master .chm file
# (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

GENERATE_CHI           = NO

# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_INDEX_ENCODING     =

# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
# enables the Previous and Next buttons.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

BINARY_TOC             = NO

# The TOC_EXPAND flag can be set to YES to add extra items for group members
# to the table of contents of the HTML help documentation and to the tree
# view.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

TOC_EXPAND             = NO

# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed
# Help (.qch) of the generated HTML documentation.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_QHP           = NO

# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to
# specify the file name of the resulting .qch file. The path specified is
# relative to the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.

QCH_FILE               =

# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output.
# For more information please see Qt Help Project / Namespace
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_NAMESPACE          =

# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project /
# Virtual Folders (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_VIRTUAL_FOLDER     = doc

# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_NAME   =

# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project /
# Custom Filters (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_ATTRS  =

# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. Qt Help Project / Filter Attributes (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_SECT_FILTER_ATTRS  =

# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty, doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHG_LOCATION           =

# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will
# be generated, which together with the HTML files form an Eclipse help
# plugin.
# To install this plugin and make it available under the help contents menu in
# Eclipse, the contents of the directory containing the HTML and XML files
# need to be copied into the plugins directory of Eclipse. The name of the
# directory within the plugins directory should be the same as the
# ECLIPSE_DOC_ID value. After copying, Eclipse needs to be restarted before
# the help appears.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_ECLIPSEHELP   = NO

# A unique identifier for the Eclipse help plugin. When installing the plugin
# the directory name containing the HTML and XML files should also have this
# name. Each documentation set should have its own identifier.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.

ECLIPSE_DOC_ID         = org.doxygen.Project

# If you want full control over the layout of the generated HTML pages it
# might be necessary to disable the index and replace it with your own. The
# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at
# the top of each HTML page. A value of NO enables the index and the value YES
# disables it. Since the tabs in the index contain the same information as the
# navigation tree, you can set this option to YES if you also set
# GENERATE_TREEVIEW to YES.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

DISABLE_INDEX          = NO

# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information. If the
# tag value is set to YES, a side panel will be generated containing a
# tree-like index structure (just like the one that is generated for HTML
# Help). For this to work a browser that supports JavaScript, DHTML, CSS and
# frames is required (i.e. any modern browser). Windows users are probably
# better off using the HTML help feature.
# Via custom stylesheets (see HTML_EXTRA_STYLESHEET) one can further fine-tune
# the look of the index. As an example, the default style sheet generated by
# doxygen has an example that shows how to put an image at the root of the
# tree instead of the PROJECT_NAME. Since the tree basically has the same
# information as the tab index, you could consider setting DISABLE_INDEX to
# YES when enabling this option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_TREEVIEW      = NO

# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values
# that doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from
# appearing in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.

ENUM_VALUES_PER_LINE   = 4

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is
# shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.

TREEVIEW_WIDTH         = 250

# When the EXT_LINKS_IN_WINDOW option is set to YES doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

EXT_LINKS_IN_WINDOW    = NO

# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.
FORMULA_FONTSIZE       = 10

# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly by IE 6.0, but are supported by all modern browsers.
#
# Note that when changing this option you need to delete any form_*.png files
# in the HTML output directory before the changes have effect.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_TRANSPARENT    = YES

# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# http://www.mathjax.org) which uses client side JavaScript for the rendering
# instead of using prerendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output.
# When enabled you may also need to install MathJax separately and configure
# the path to it using the MATHJAX_RELPATH option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

USE_MATHJAX            = NO

# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_FORMAT         = HTML-CSS

# When MathJax is enabled you need to specify the location relative to the
# HTML output directory using the MATHJAX_RELPATH option. The destination
# directory should contain the MathJax.js script. For instance, if the
# mathjax directory is located at the same level as the HTML output directory,
# then MATHJAX_RELPATH should be ../mathjax. The default value points to the
# MathJax Content Delivery Network so you can quickly see the result without
# installing MathJax.
# However, it is strongly recommended to install a local copy of MathJax from
# http://www.mathjax.org before deployment.
# The default value is: http://cdn.mathjax.org/mathjax/latest.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest

# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
# extension names that should be enabled during MathJax rendering. For example
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_EXTENSIONS     =

# The MATHJAX_CODEFILE tag can be used to specify a file with JavaScript
# pieces of code that will be used on startup of the MathJax code. See the
# MathJax site (see: http://docs.mathjax.org/en/latest/output.html) for more
# details. For an example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_CODEFILE       =

# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses JavaScript and DHTML and
# should work on any modern browser. Note that when using HTML help
# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
# there is already a search function so this one should typically be disabled.
# For large projects the JavaScript based search engine can be slow, then
# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible
# to search using the keyboard; to jump to the search box use <access key> + S
# (what the <access key> is depends on the OS and browser, but it is typically
# <CTRL>, <ALT>/