Personal blog: El hilo del laberinto
Last updated: April 11, 2013
In this article I return to my experience with "Live Upgrade". This time I describe the migration from Solaris 10 Update 10 (8/11) to Solaris 10 Update 11 (1/13).
To fully understand this document you should read the earlier articles on this subject. To keep things tidy I only link to the previous update, but I recommend reading the whole "Live Upgrade" series:
This Solaris update is very light; it shows that Oracle wants us to migrate to Solaris 11. Some of the improvements: a new ZFS version (faster "zfs list", support for blocks of up to 1 MB), installation onto an iSCSI device, a "pre flight checker" for "Live Upgrade" and zones, new drivers, notable performance improvements in SSH and related tools (I can vouch for that: it is remarkable how much changing the TCP window size pays off), improvements for "serious" machines (memory reconfiguration, service processor, etc.), new bundled software, and so on.
Remember that the big advantages of using "Live Upgrade" are:
The steps to upgrade our system with "Live Upgrade" are the following:
(From now on, the first step should really be "lupc", the "Live Upgrade PreFlight Check".)

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u10               yes      yes    yes       no     -
Solaris10u10BACKUP         yes      no     no        yes    -

# ludelete Solaris10u10BACKUP
System has findroot enabled GRUB
Checking if last BE on any disk...
BE <Solaris10u10BACKUP> is not the last BE on any disk.
No entry for BE <Solaris10u10BACKUP> in GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <Solaris10u10BACKUP> deleted.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u10               yes      yes    yes       no     -
Next we clone the current BE so we can upgrade it. We name it "BACKUP" because we will clone it again later, leaving this copy as the backup:
# time lucreate -n Solaris10u11BACKUP
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c4t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <Solaris10u11BACKUP>.
Source boot environment is <Solaris10u10>.
Creating file systems on boot environment <Solaris10u11BACKUP>.
Populating file systems on boot environment <Solaris10u11BACKUP>.
Temporarily mounting zones in PBE <Solaris10u10>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <sistema/ROOT/Solaris10u10> on <sistema/ROOT/Solaris10u10@Solaris10u11BACKUP>.
Creating clone for <sistema/ROOT/Solaris10u10@Solaris10u11BACKUP> on <sistema/ROOT/Solaris10u11BACKUP>.
Creating snapshot for <sistema/ROOT/Solaris10u10/var> on <sistema/ROOT/Solaris10u10/var@Solaris10u11BACKUP>.
Creating clone for <sistema/ROOT/Solaris10u10/var@Solaris10u11BACKUP> on <sistema/ROOT/Solaris10u11BACKUP/var>.
Creating snapshot for <sistema/ROOT/Solaris10u10/zones> on <sistema/ROOT/Solaris10u10/zones@Solaris10u11BACKUP>.
Creating clone for <sistema/ROOT/Solaris10u10/zones@Solaris10u11BACKUP> on <sistema/ROOT/Solaris10u11BACKUP/zones>.
Creating snapshot for <sistema/ROOT/Solaris10u10/zones/babylon5> on <sistema/ROOT/Solaris10u10/zones/babylon5@Solaris10u11BACKUP>.
Creating clone for <sistema/ROOT/Solaris10u10/zones/babylon5@Solaris10u11BACKUP> on <sistema/ROOT/Solaris10u11BACKUP/zones/babylon5>.
Creating snapshot for <sistema/ROOT/Solaris10u10/zones/stargate> on <sistema/ROOT/Solaris10u10/zones/stargate@Solaris10u11BACKUP>.
Creating clone for <sistema/ROOT/Solaris10u10/zones/stargate@Solaris10u11BACKUP> on <sistema/ROOT/Solaris10u11BACKUP/zones/stargate>.
Mounting ABE <Solaris10u11BACKUP>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <Solaris10u11BACKUP>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <Solaris10u10>.
Making boot environment <Solaris10u11BACKUP> bootable.
WARNING: split filesystem </> file system type <zfs> cannot inherit
mount point options <-> from parent filesystem </> file
type <-> because the two file systems have different types.
Updating bootenv.rc on ABE <Solaris10u11BACKUP>.
/.alt.tmp.b-V1b.mnt/sistema/ROOT/Solaris10u11BACKUP: No such file or directory
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <Solaris10u11BACKUP> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <Solaris10u11BACKUP> in GRUB menu
Population of boot environment <Solaris10u11BACKUP> successful.
Creation of boot environment <Solaris10u11BACKUP> successful.

real    1m19.279s
user    0m18.516s
sys     0m20.762s

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u10               yes      yes    yes       no     -
Solaris10u11BACKUP         yes      no     no        yes    -
As you can see, cloning the current BE under ZFS takes barely a minute and a half. And the machine is quite loaded at the moment.
The next step is to upgrade the operating system in the new BE. To do that I copy the ISO image to "/tmp", mount it, and upgrade from it:
# lofiadm -a /tmp/sol-10-u11-ga-x86-dvd.iso
/dev/lofi/1
# mkdir /tmp/sol-10-u11-ga-x86-dvd
# mount -o ro -F hsfs /dev/lofi/1 /tmp/sol-10-u11-ga-x86-dvd
# time luupgrade -n Solaris10u11BACKUP -u -s /tmp/sol-10-u11-ga-x86-dvd/
System has findroot enabled GRUB
No entry for BE <Solaris10u11BACKUP> in GRUB menu
Copying failsafe kernel from media.
64995 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </tmp/sol-10-u11-ga-x86-dvd//Solaris_10/Tools/Boot>
INFORMATION: Auto Registration already done for this BE <Solaris10u11BACKUP>.
Validating the contents of the media </tmp/sol-10-u11-ga-x86-dvd/>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <Solaris10u11BACKUP>.
Checking for GRUB menu on ABE <Solaris10u11BACKUP>.
Saving GRUB menu on ABE <Solaris10u11BACKUP>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <Solaris10u11BACKUP>.
Performing the operating system upgrade of the BE <Solaris10u11BACKUP>.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE <Solaris10u11BACKUP>.
Updating package information on boot environment <Solaris10u11BACKUP>.
Package information successfully updated on boot environment <Solaris10u11BACKUP>.
Adding operating system patches to the BE <Solaris10u11BACKUP>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <Solaris10u11BACKUP> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <Solaris10u11BACKUP> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <Solaris10u11BACKUP>. Before you activate
boot environment <Solaris10u11BACKUP>, determine if any additional system
maintenance is required or if additional media of the software distribution
must be installed.
The Solaris upgrade of the boot environment <Solaris10u11BACKUP> is complete.
Creating miniroot device
Configuring failsafe for system.
Failsafe configuration is complete.
Installing failsafe
Failsafe install is complete.

real    25m28.790s
user    11m6.120s
sys     8m29.443s

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u10               yes      yes    yes       no     -
Solaris10u11BACKUP         yes      no     no        yes    -

# umount /tmp/sol-10-u11-ga-x86-dvd
# lofiadm -d /dev/lofi/1
The warning "CAUTION: Interrupting this process may leave the boot environment unstable or unbootable." refers to the new BE (the "Solaris10u11BACKUP" environment we are upgrading), not to the BE currently running. In other words, the upgrade is safe: if anything goes wrong, we simply delete the new BE and try again.
The upgrade is fast, under 26 minutes, during which the machine stays in production.
Let's reboot:
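Since the running BE is untouched, recovering from a failed upgrade is just a BE deletion. A minimal sketch, using the BE names from this article:

```shell
# If the upgraded (inactive) BE turns out to be broken, discard it and
# start over; the currently running BE is never modified:
ludelete Solaris10u11BACKUP
lustatus    # confirm only the original BE remains
```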
# luactivate Solaris10u11BACKUP
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <Solaris10u10>
A Live Upgrade Sync operation will be performed on startup of boot environment <Solaris10u11BACKUP>.
Generating boot-sign for ABE <Solaris10u11BACKUP>
Saving existing file </etc/bootsign> in top level dataset for BE <Solaris10u11BACKUP> as <mount-point>//etc/bootsign.prev.
Generating partition and slice information for ABE <Solaris10u11BACKUP>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import sistema
     zfs inherit -r mountpoint sistema/ROOT/Solaris10u10
     zfs set mountpoint=<mountpointName> sistema/ROOT/Solaris10u10
     zfs mount sistema/ROOT/Solaris10u10

3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <Solaris10u11BACKUP> successful.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u10               yes      yes    no        no     -
Solaris10u11BACKUP         yes      no     yes       no     -

# init 6
Once rebooted, we check that we are running the correct version:
# cat /etc/release
                   Oracle Solaris 10 1/13 s10x_u11wos_24a X86
 Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
                           Assembled 17 January 2013

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u10               yes      no     no        yes    -
Solaris10u11BACKUP         yes      yes    yes       no     -
Now that everything seems to work, we clone the BE to put it into real production, keeping the "backup":
# time lucreate -n Solaris10u11
[...]
# luactivate Solaris10u11
[...]
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u10               yes      no     no        yes    -
Solaris10u11BACKUP         yes      yes    no        no     -
Solaris10u11               yes      no     yes       no     -
We reboot the machine, and we are done.
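The whole procedure above can be condensed into a short recap (the BE names and ISO path are the ones used in this article; the "/mnt" mount point is just an example):

```shell
lucreate -n Solaris10u11BACKUP               # clone the running BE (fast under ZFS)
lofiadm -a /tmp/sol-10-u11-ga-x86-dvd.iso    # attach the DVD image; prints /dev/lofi/N
mount -o ro -F hsfs /dev/lofi/1 /mnt         # mount it read-only
luupgrade -n Solaris10u11BACKUP -u -s /mnt   # upgrade the clone; system stays in production
luactivate Solaris10u11BACKUP                # make the clone the BE used on next boot
init 6                                       # reboot with init/shutdown, never "reboot"
```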
The most notable changes compared with the previous upgrade are:
After a few days, once we see that everything works without problems, we want to take advantage of all the improvements in the new Solaris release. In particular, the ZFS improvements.
# zpool upgrade -a
This system is currently running ZFS pool version 32.

Successfully upgraded 'datos'
Successfully upgraded 'sistema'

# zfs upgrade -a
0 filesystems upgraded
79 filesystems already at this version

# init 6
We upgrade the zpools and the datasets, and reboot the machine. The reboot matters: we want to make sure that our GRUB supports this ZFS version. The system upgrade is supposed to have updated GRUB as well, but I have had problems with that in the past.
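A quick sanity check after the reboot (the pool names are the ones from this article; "bootadm list-menu" is the standard way to query the GRUB menu on Solaris 10 x86):

```shell
zpool get version sistema datos   # both pools should now report version 32
bootadm list-menu                 # the updated GRUB still lists our boot entries
```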
Besides bug fixes, what improvements does ZFS bring? I recommend reading Oracle's document carefully. Some details:
# zpool upgrade -v
This system is currently running ZFS pool version 32.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Reserved
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements
 29  RAID-Z/mirror hybrid allocator
 30  Reserved
 31  Improved 'zfs list' performance
 32  One MB blocksize

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.
Version 30 is the ZFS encryption available in Solaris 11, but Solaris 10 does not support it. Version 31 makes "zfs list" practically instantaneous, even in cases like mine:
# zfs list | wc -l
    2292
Finally, version 32 allows a block size of up to 1 MB, instead of the traditional 128 KB maximum. Interesting if you have large files that change rarely and, above all, are stored on RAIDZ/Z2/Z3.
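As a sketch of how the new block size is used (the dataset name here is hypothetical): "recordsize" is set per dataset, and only blocks written after the change use the new size:

```shell
# Requires pool version 32 (Solaris 10 Update 11 or later):
zfs create -o recordsize=1M datos/backups   # hypothetical dataset for large, static files
zfs get recordsize datos/backups            # verify the property took effect
```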