Personal blog: El hilo del laberinto
Last updated: Monday, November 3, 2008
In this document I go deeper into my experience with "Live Upgrade". This time I describe the migration from Solaris 10 Update 5 (5/08) to Solaris 10 Update 6 (10/08).
To fully understand this document, you should first read the earlier articles:
This new Solaris update includes some very important improvements such as, at long last, support for ZFS as "root" and as "boot". ZFS in general has improved a great deal (for example, gzip compression, "copies", etc.). It is a good idea, however, not to use those features yet, because doing so would make it impossible to fall back to Update 5 if we run into problems. The ideal approach is to upgrade the system to Update 6 without taking advantage of the new features and, after a prudent period with no problems, start using them. That is the approach I am going to take.
My starting point is:
[root@tesalia /]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u4                yes      no     no        yes    -
Solaris10u5                yes      yes    yes       no     -
The first step is to delete an old Boot Environment, to make room for the new update:
[root@tesalia /]# ludelete Solaris10u4
The boot environment <Solaris10u4> contains the GRUB menu.
Attempting to relocate the GRUB menu.
/usr/sbin/ludelete: lulib_relocate_grub_slice: not found
ERROR: Cannot relocate the GRUB menu in boot environment <Solaris10u4>.
ERROR: Cannot delete boot environment <Solaris10u4>.
Unable to delete boot environment.
Well, it seems this problem still has not been fixed in Solaris 10 Update 5. For now the situation has to be resolved by hand. I take note of the old BE's partition table, in order to reuse those partitions:
/dev/md/dsk/d5         3100663 2339663  698987    77%    /.alt.Solaris10u4
/dev/md/dsk/d4003       481199  301502  131578    70%    /.alt.Solaris10u4/usr/openwin
/dev/md/dsk/d4004       239407  115673   99794    54%    /.alt.Solaris10u4/usr/dt
/dev/md/dsk/d4006       337071  217326   86038    72%    /.alt.Solaris10u4/usr/jdk
/dev/md/dsk/d4005      5441534 4721761  665358    88%    /.alt.Solaris10u4/var/sadm
/dev/md/dsk/d4007      1786711 1462553  270557    85%    /.alt.Solaris10u4/opt/sfw
/dev/md/dsk/d4023       674159  476473  137012    78%    /.alt.Solaris10u4/usr/sfw
/dev/md/dsk/d4008       674159  561785   51700    92%    /.alt.Solaris10u4/usr/staroffice7
swap                  32991268       0 32991268     0%    /.alt.Solaris10u4/var/run
swap                  32991268       0 32991268     0%    /.alt.Solaris10u4/tmp
With the problem worked around, we delete the old BE:
[root@tesalia /]# ludelete Solaris10u4
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Updating GRUB menu on device </dev/md/dsk/d0>
Boot environment <Solaris10u4> deleted.
Next we clone the current BE, so we can upgrade the clone. The most significant change is the StarOffice 8 path:
[root@tesalia /]# cat z-live_upgrade-Solaris10u6
lucreate -n Solaris10u6 -m /:/dev/md/dsk/d5:ufs \
   -m /usr/openwin:/dev/md/dsk/d4003:ufs \
   -m /usr/dt:/dev/md/dsk/d4004:ufs \
   -m /var/sadm:/dev/md/dsk/d4005:ufs \
   -m /usr/jdk:/dev/md/dsk/d4006:ufs \
   -m /opt/sfw:/dev/md/dsk/d4007:ufs \
   -m /opt/staroffice8:/dev/md/dsk/d4008:ufs \
   -m /usr/sfw:/dev/md/dsk/d4023:ufs

[root@tesalia /]# time ./z-live_upgrade-Solaris10u6
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <Solaris10u5> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c1d0s5> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <Solaris10u6>.
Source boot environment is <Solaris10u5>.
Creating boot environment <Solaris10u6>.
Checking for GRUB menu on boot environment <Solaris10u6>.
The boot environment <Solaris10u6> does not contain the GRUB menu.
Creating file systems on boot environment <Solaris10u6>.
Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d5>.
Creating <ufs> file system for </opt/sfw> in zone <global> on </dev/md/dsk/d4007>.
Creating <ufs> file system for </opt/staroffice8> in zone <global> on </dev/md/dsk/d4008>.
Creating <ufs> file system for </usr/dt> in zone <global> on </dev/md/dsk/d4004>.
Creating <ufs> file system for </usr/jdk> in zone <global> on </dev/md/dsk/d4006>.
Creating <ufs> file system for </usr/openwin> in zone <global> on </dev/md/dsk/d4003>.
Creating <ufs> file system for </usr/sfw> in zone <global> on </dev/md/dsk/d4023>.
Creating <ufs> file system for </var/sadm> in zone <global> on </dev/md/dsk/d4005>.
Mounting file systems for boot environment <Solaris10u6>.
Calculating required sizes of file systems for boot environment <Solaris10u6>.
Populating file systems on boot environment <Solaris10u6>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point </opt/sfw>.
Populating contents of mount point </opt/staroffice8>.
Populating contents of mount point </usr/dt>.
Populating contents of mount point </usr/jdk>.
Populating contents of mount point </usr/openwin>.
Populating contents of mount point </usr/sfw>.
Populating contents of mount point </var/sadm>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <Solaris10u6>.
Creating compare database for file system </var/sadm>.
Creating compare database for file system </usr/sfw>.
Creating compare database for file system </usr/openwin>.
Creating compare database for file system </usr/jdk>.
Creating compare database for file system </usr/dt>.
Creating compare database for file system </opt/sfw>.
Creating compare database for file system </>.
Updating compare databases on boot environment <Solaris10u6>.
Making boot environment <Solaris10u6> bootable.
Updating bootenv.rc on ABE <Solaris10u6>.
Generating partition and slice information for ABE <Solaris10u6>
Setting root slice to Solaris Volume Manager metadevice </dev/md/dsk/d5>.
Population of boot environment <Solaris10u6> successful.
Creation of boot environment <Solaris10u6> successful.
Cloning the current system to create the new BE takes about two hours. Fortunately, the process will be nearly instantaneous once I have migrated to ZFS.
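The reason ZFS makes cloning instantaneous is copy-on-write: a clone shares all blocks with its origin snapshot until they diverge. A minimal sketch of what that looks like (the pool and dataset names here are illustrative assumptions, not taken from this machine; on a real ZFS root, lucreate drives these operations for you):

```shell
# Hypothetical dataset names for illustration only.
# Snapshot the current root dataset: constant time, no data copied.
zfs snapshot rpool/ROOT/Solaris10u6@pre-upgrade

# Clone the snapshot into a new writable dataset: also near-instant,
# since blocks are shared copy-on-write with the origin.
zfs clone rpool/ROOT/Solaris10u6@pre-upgrade rpool/ROOT/Solaris10u7
```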
The next step is to upgrade the operating system in the new BE. To do that, I copy the ISO image to "/tmp", mount it, and upgrade from it:
[root@tesalia tmp]# lofiadm -a /tmp/sol-10-u6-ga-x86-dvd.iso
/dev/lofi/1
[root@tesalia tmp]# mkdir /tmp/sol-10-u6-ga-x86-dvd
[root@tesalia tmp]# mount -o ro -F hsfs /dev/lofi/1 /tmp/sol-10-u6-ga-x86-dvd
[root@tesalia tmp]# luupgrade -n Solaris10u6 -u -s /tmp/sol-10-u6-ga-x86-dvd

Copying failsafe multiboot from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is <ufs>
Mounting miniroot at </tmp/sol-10-u6-ga-x86-dvd/Solaris_10/Tools/Boot>
Validating the contents of the media </tmp/sol-10-u6-ga-x86-dvd>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <Solaris10u6>.
Checking for GRUB menu on ABE <Solaris10u6>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <Solaris10u6>.
Performing the operating system upgrade of the BE <Solaris10u6>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Deleted empty GRUB menu on ABE <Solaris10u6>.
Updating package information on boot environment <Solaris10u6>.
Package information successfully updated on boot environment <Solaris10u6>.
Adding operating system patches to the BE <Solaris10u6>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <Solaris10u6> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <Solaris10u6> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <Solaris10u6>. Before you activate boot
environment <Solaris10u6>, determine if any additional system maintenance
is required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <Solaris10u6> is complete.
Installing failsafe
Failsafe install is complete.
[root@tesalia tmp]# umount /tmp/sol-10-u6-ga-x86-dvd
[root@tesalia tmp]# lofiadm -d /dev/lofi/1
All that would remain now is to switch the boot BE and reboot the system. In my particular case, however, there are some problems that have to be solved first:
As a lesser evil, I can boot the new BE without the mail system, so I can update and test it for a while before activating it. The simplest way to achieve this is to delete the executable responsible for mail, so that launching it fails. For that we can use "lumount Solaris10u6" to make the new BE visible and remove the file.
Bear in mind that the system upgrade completely wrecks my mail configuration. It is therefore advisable to make a backup beforehand, and to go over the upgrade with a fine-tooth comb afterwards.
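The steps just described might look like this. The path of the mail binary is my assumption for illustration; use whatever executable your mail service actually launches:

```shell
# Mount the inactive BE so its filesystem becomes visible.
lumount Solaris10u6 /mnt

# Disable mail in the new BE by renaming its binary so startup fails.
# (Hypothetical path: adapt to your MTA.)
mv /mnt/usr/lib/sendmail /mnt/usr/lib/sendmail.disabled

# Unmount the BE again before activating it.
luumount Solaris10u6
```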
WARNING: The procedure described here does not work, due to GRUB limitations. See a later section.
What I do is break the new BE's mirror and join both partitions into a RAID 0, doubling its size. This way I lose redundancy for two or three days, the time it takes me to verify that everything works and migrate it all to ZFS boot/root:
[root@tesalia lib]# metastat d5
d5: Mirror
    Submirror 0: d51
      State: Okay
    Submirror 1: d52
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 6297984 blocks (3.0 GB)

d51: Submirror of d5
    State: Okay
    Size: 6297984 blocks (3.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1d0s5            0     No            Okay   Yes

d52: Submirror of d5
    State: Okay
    Size: 6329610 blocks (3.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c2d0s5        32130     Yes           Okay   Yes

Device Relocation Information:
Device   Reloc  Device ID
c1d0     Yes    id1,cmdk@AST3250823AS=____________3ND1LKMP
c2d0     Yes    id1,cmdk@AWDC_WD7500AAKS-00RBA0=_____WD-WCAPT1056964

[root@tesalia lib]# metadb -d c2d0s5
[root@tesalia lib]# metadetach d5 d52
d5: submirror d52 is detached
[root@tesalia lib]# metaclear d52
d52: Concat/Stripe is cleared
[root@tesalia lib]# metattach d51 c2d0s5
d51: component is attached
[root@tesalia /]# growfs /dev/md/rdsk/d5
/dev/md/rdsk/d5: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/rdsk/d5: 12652416 sectors in 1569 cylinders of 128 tracks, 63 sectors
        6177.9MB in 121 cyl groups (13 c/g, 51.19MB/g, 6208 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 104928, 209824, 314720, 419616, 524512, 629408, 734304, 839200, 944096,
 11643488, 11748384, 11853280, 11958176, 12063072, 12167968, 12272864,
 12377760, 12482656, 12587552
We have just doubled the BE's space from 3 to 6 gigabytes, at the cost of losing its redundancy. As I said, this is not a problem because in a couple of days, after verifying that everything works perfectly, I will migrate everything to ZFS root/boot.
All that remains now is to activate the new BE and reboot the machine:
[root@tesalia /]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u5                yes      yes    yes       no     -
Solaris10u6                yes      no     no        yes    -

[root@tesalia /]# luactivate Solaris10u6
A Live Upgrade Sync operation will be performed on startup of boot environment <Solaris10u6>.
Generating partition and slice information for ABE <Solaris10u6>
Boot menu exists.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Do *not* change *hard* disk order in the BIOS.

2. Boot from the Solaris Install CD or Network and bring the system to
   Single User mode.

3. Mount the Parent boot environment root slice to some directory (like
   /mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c1d0s0 /mnt

4. Run <luactivate> utility with out any arguments from the Parent boot
   environment root slice, as shown below:

     /mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
   indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
GRUB menu is on device: </dev/md/dsk/d0>.
Filesystem type for menu device: <ufs>.
Activation of boot environment <Solaris10u6> successful.
[root@tesalia /]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris10u5                yes      yes    no        no     -
Solaris10u6                yes      no     yes       no     -

[root@tesalia /]# init 6
updating /platform/i86pc/boot_archive...this may take a minute
Once rebooted, we check that we are running the correct version; I then recreate the mail service and make sure everything works perfectly:
[root@tesalia /]# cat /etc/release
                       Solaris 10 10/08 s10x_u6wos_07b X86
           Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                            Assembled 27 October 2008
The most notable changes compared to the previous upgrade are:
This also means that for a few days I will not run "zpool upgrade"/"zfs upgrade". That way we temporarily avoid losing the ability to go back to Solaris 10 Update 5 if we detect any problem during testing.
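Run without arguments, both commands are read-only: they just report which pools and filesystems are below the current on-disk format, so they are safe to use while deciding. A sketch of the check:

```shell
# Read-only: lists pools whose on-disk version is older than the
# version this Solaris release supports. Nothing is modified.
zpool upgrade

# Same idea for ZFS filesystem versions.
zfs upgrade

# The actual upgrade would be "zpool upgrade -a" / "zfs upgrade -a".
# That step is irreversible: older releases (like Update 5) can no
# longer import or mount the upgraded pools/filesystems.
```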
After a couple of days of experimentation I can say that everything works perfectly, with no notable incidents. Great. But there is a problem. A serious one.
GRUB does not support RAID 0 on the boot partition. So, with my trick to double the space assigned to "root", GRUB will sometimes (depending on where things happen to have been written) report an error about trying to access the kernel beyond the end of the partition. Note that the system itself works perfectly once the kernel is loaded into memory, since the kernel does know how to use RAID 0 without any trouble.
Because of this, and because Solaris 10 Update 6 appears to be working very well, I decide to delete the Solaris 10 Update 5 BE and clone my current Update 6 configuration into that space. Since that space does not use RAID 0, GRUB should have no problems with it.
The one detail to keep in mind is that as soon as I delete the Update 5 BE, I am left without a safety net. This is especially delicate because, with Update 6 (badly) installed on a RAID 0 as it is, I run the risk of not being able to reboot the system. But it is a calculated risk.
The steps to follow are therefore similar to the ones documented earlier on this page. It is enough to clone the current BE; there is nothing to upgrade, because the system is already up to date.
The first surprise is that when I try to delete the old BE, I run into the following message:
[root@tesalia /]# ludelete Solaris10u5
This system contains only a single GRUB menu for all boot environments. To
enhance reliability and improve the user experience, live upgrade requires
you to run a one time conversion script to migrate the system to multiple
redundant GRUB menus. This is a one time procedure and you will not be
required to run this script on subsequent invocations of Live Upgrade
commands. To run this script invoke:

/usr/lib/lu/lux86menu_propagate /path/to/new/Solaris/install/image OR
/path/to/LiveUpgrade/patch

where /path/to/new/Solaris/install/image is an absolute path to the Solaris
media or netinstall image from which you installed the Live Upgrade packages
and /path/to/LiveUpgrade/patch is an absolute path to the Live Upgrade patch
from which this Live Upgrade script was patched into the system.
Unable to delete boot environment.
We do as we are told. I have the system's ISO image mounted at /tmp/x:
[root@tesalia /]# /usr/lib/lu/lux86menu_propagate /tmp/x/
Validating the contents of the media </tmp/x/>.
The media is a standard Solaris media.
The media contains a Solaris operating system image.
The media contains <Solaris> version <10>.
Installing latest Live Upgrade package/patch on all BEs
Updating Live Upgrade packages on all BEs
Successfully updated Live Upgrade packages on all BEs
Successfully extracted GRUB from media
Extracted GRUB menu from GRUB slice
Installing GRUB bootloader to all GRUB based BEs
/dev/md/dsk/d0
Skipping parse of <1> in </usr/sbin/metastat> output
stage1 written to partition 0 sector 0 (abs 8064)
stage2 written to partition 0, 265 sectors starting at 50 (abs 8114)
stage1 written to partition 0 sector 0 (abs 32130)
stage2 written to partition 0, 265 sectors starting at 50 (abs 32180)
/dev/md/dsk/d5
Skipping parse of <1> in </usr/sbin/metastat> output
stage1 written to partition 0 sector 0 (abs 8064)
stage2 written to partition 0, 265 sectors starting at 50 (abs 8114)
System does not have an applicable x86 boot partition
install GRUB to all BEs successful
Converting root entries to findroot
Generated boot signature <BE_Solaris10u5> for BE <Solaris10u5>
Converting GRUB menu entry for BE <Solaris10u5>
Added findroot entry for BE <Solaris10u5> to GRUB menu
Generated boot signature <BE_Solaris10u6> for BE <Solaris10u6>
Converting GRUB menu entry for BE <Solaris10u6>
Added findroot entry for BE <Solaris10u6> to GRUB menu
No more bootadm entries. Deletion of bootadm entries is complete.
Changing GRUB menu default setting to <2>
Done eliding bootadm entries.
File </boot/grub/menu.lst> propagation successful
Menu propagation successful
File </etc/lu/GRUB_slice> deletion successful
Successfully deleted GRUB_slice file
File </etc/lu/GRUB_root> deletion successful
Successfully deleted GRUB_root file
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Conversion was successful
Once this procedure is done, cloning the BE proceeds normally.