HP 2000 Modular Smart Array Controller Firmware J202P01, J212P01, and J302P01 Release Notes

December 2012



Version: 

  • J202P01 (Fibre Channel)

  • J212P01 (iSCSI)

  • J302P01 (SAS)

Description

This package delivers firmware for HP MSA2000 array controllers. It may include enhanced features and fixes for issues found during use and additional qualification testing.

Note:

Approved companion versions of drive enclosure firmware may also be included in this firmware package.

Supersedes 

  • J202R10 (Fibre Channel)

  • J212R10 (iSCSI)

  • J302R10 (SAS)

Update recommendation: 

Immediate — An issue exists on HP 2000 Modular Smart Array products running firmware versions J200P46, J210P22, or J300P22 that eventually causes controller configuration information to be lost, with subsequent loss of management capability from that controller. Array management, event messaging, and logging cease functioning, but host I/O continues to operate normally. The issue affects management of the array only from the affected controller; if a partner controller is available, the array can still be managed through the partner controller. Because the configuration information is stored in non-volatile memory, resetting or powering off the controller does not clear the error; if the issue occurs, the controller must be replaced. This failure mode is time-sensitive, and HP recommends immediately upgrading firmware on all MSA2000 controllers. This is not a hardware issue, and proactive replacement of a controller is not a solution; to avoid this condition, you must upgrade your controllers to the latest version of firmware.

Versions of firmware that resolve this issue are in the following table.

Product         Corrected firmware                              Affected firmware
MSA2000 FC      J202P01, J202R10, J201R12, J201R09, J200P50     J200P46
MSA2000 iSCSI   J212P01, J212R10, J211R09, J210P23              J210P22
MSA2000 SAS     J302P01, J302R10, J301R09, J300P23              J300P22

Versioning key: 

AxxxByyy-zz

Where the following letters represent release information about the firmware version:

A

MSA model (TS=P2000 G3, M=MSA2000 G2, J=MSA2000)

xxx

Firmware version. This value changes for major, scheduled releases. Depending on the MSA model, this number may also indicate model protocol.

B

Type of release (R=Regular release, P=Planned update to a regular release, S=Special release)

yyy

Major release number

-zz

Minor release number
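As an illustration of the key, the following shell sketch splits a version string into these fields. The field widths are assumptions inferred from the J202P01 and J202R10 examples in these notes (1-letter model, 3-digit version, 1-letter release type, 2-digit major number, with no "-zz" minor suffix present).

```shell
#!/bin/sh
# Decode a firmware version of the form AxxxByy (no "-zz" suffix).
# Field widths are assumptions inferred from the examples in these notes.
decode() {
    v=$1
    model=$(printf '%s' "$v" | cut -c1)      # A: MSA model letter
    fwver=$(printf '%s' "$v" | cut -c2-4)    # xxx: firmware version
    rtype=$(printf '%s' "$v" | cut -c5)      # B: release type (R/P/S)
    major=$(printf '%s' "$v" | cut -c6-7)    # yyy: major release number
    echo "model=$model version=$fwver type=$rtype major=$major"
}

decode J202P01   # J = MSA2000, version 202, P = planned update, major 01
```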

Product models

  • HP 2012fc Modular Smart Array (HP MSA2012fc)

  • HP 2212fc Modular Smart Array (HP MSA2212fc)

  • HP 2012i Modular Smart Array (HP MSA2012i)

  • HP 2012sa Modular Smart Array (HP MSA2012sa)

Operating systems

Operating systems supported for use with HP MSA2000 G1 Controllers (and when installing the binary firmware package):

  • Microsoft Windows Server 2008 x64 - All Editions

  • Microsoft Windows Server 2008 W32 - All Editions

  • Microsoft Windows Server 2003 x64 Edition (Including R2 & Base Edition)

  • Microsoft Windows Server 2003 - All Editions (Including R2 & Base Edition)

  • Red Hat Enterprise Linux 5 Server (x86-64)

  • Red Hat Enterprise Linux 5 Server (x86)

  • SUSE LINUX Enterprise Server 10 (AMD64/EM64T)

  • SUSE LINUX Enterprise Server 10 (x86)

  • VMware ESX/ESXi 4.1

  • VMware ESX/ESXi 4.0

  • VMware ESX/ESXi Server 3.5

  • VMware ESX Server 3.0

Operating systems supported for use when installing the Smart Component firmware package:

  • Microsoft Windows Server 2008 x64 - All Editions

  • Microsoft Windows Server 2008 W32 - All Editions

  • Microsoft Windows Server 2008 Itanium

  • Microsoft Windows Server 2003 x64 Edition (Including R2 & Base Edition)

  • Microsoft Windows Server 2003 - All Editions (Including R2 & Base Edition)

  • Microsoft Windows Server 2003 64-Bit Edition (Itanium)

  • Red Hat Enterprise Linux 5 Server (x86-64)

  • Red Hat Enterprise Linux 5 Server (x86)

  • SUSE LINUX Enterprise Server 10 (AMD64/EM64T)

  • SUSE LINUX Enterprise Server 10 (x86)

Fixes and enhancements

The following enhancements and fixes were incorporated in J202P01, J212P01, and J302P01:

  • In a Windows cluster environment, there was a possibility that a scrub, reconstruction, or volume creation could cause a controller to crash.

  • In the Windows Server 2008 R2 environment, cluster validation failed during the cluster creation.

  • iSCSI IQN/host mapping failed when uppercase characters were used in the IQN string.

  • Enclosure ID numbers were not updated when an additional drive enclosure was added to an array running on a single controller.

  • In the SMU, the enclosure status displayed a false red alert status following a firmware update when the status was actually OK.

  • In the CLI, for the set advanced-settings command, the single-controller on parameter was added to set Single Controller redundancy mode for a single installed controller.

The following enhancements and fixes were incorporated in J202R10, J212R10, and J302R10:

  • Scrub caused controllers to halt.

  • Identical vdisks created through the CLI and SMU reported different volume sizes.

  • Power Supply and I/O module statuses were reported differently on Controller A and Controller B.

  • Controller halted when a vdisk expansion started.

  • In dual-controller configurations, if one controller halted, the Fibre Channel host links did not failover to the surviving controller.

  • Medium errors on drives in a RAID 6 vdisk caused another vdisk to report a critical state.

  • The controller halted when clearing metadata of a leftover disk.

  • Due to loss of heartbeat between the two controllers, one of the controllers halted.

  • The event log was not updated when a drive was removed (or marked as “down”) while a utility, such as verify, was running.

  • RAID 6 reconstruct caused partner controller to halt.

  • Volumes became unaccessible when converting a master volume to a standard volume.

  • Drives in non fault-tolerant vdisks did not report unrecovered media error as a warning.

  • Heavy RSR load caused a controller to halt.

  • Added “year” to the critical error log.

  • Controller halted when utility timing was in conflict.

  • Controller halted when, under heavy I/O loads, a RAID 6 vdisk had one or more failed drives.

  • There were verification errors after an internal error recovery process completed.

  • After a failover, an incorrect vdisk utility status was reported in the CLI and SMU.

  • Host lost access when a large vdisk was being rebuilt.

  • The spare drive was not activated when the vdisk passed into a critical state.

  • Updated scrub utility for improved behavior.

  • RAID 6 reconstruct reported incorrectly when an additional failure occurred.

  • Drive LED behavior was inconsistent.

  • A controller halted due to excessive retries when a drive that was being reconstructed to had a failure.

  • Enhanced the Power Supply module Voltage/Fan Fault/Service Required (bottom) LED. It illuminates solid amber during an under-voltage condition and will now remain illuminated even after the current returns to normal and the power supply is replaced or power cycled.

  • A controller halted during an update of an expansion controller.

  • Both controllers halted during a failover.

  • Chassis failure caused data access problems and/or data loss.

  • Multiple MSA systems presented the same WWPN.

  • Improved logging with better historical information.

  • Scrub log entries did not properly display the parity error count after a failover event.

  • In both the SMU and CLI, metadata was not cleared from all of the selected drives when commanded to do so.

  • The controller stalled when a vdisk with snapshot and replication volumes was deleted.

  • A duplicate vdisk was reported after a halted controller recovered.

  • LUNs were not properly re-mapped after changing vdisk ownership.

  • The wrong drive was marked as “down”.

  • Improved management of cache flushing.

  • Added ability to allow flow control on iSCSI ports.

  • Removed unused debug agent component.

  • RAID data logs were not flushed after an extended power off or when a controller was restarted but failover did not occur.

  • RAID 50 error reporting did not report errors during verify.

  • Improved performance on RAID 10 vdisks.

  • When a controller was removed and I/O was in process, the transaction was held in the cache.

  • Scrubbing message was unclear when the vdisk was owned by the other controller.

  • After extensive run time and I/O, the system could stall during a shutdown procedure.

  • Background vdisk scrub stopped with no warning.

  • If a controller was restarting or failing over at the same time that a snapshot was being deleted, there was a possibility of the snapshot becoming inconsistent.

  • Disks were lost during an expansion controller upgrade.

  • A power supply module failure event was not included in the system event logs.

  • Vdisk scrub failed ungracefully on a disk unrecoverable read error (URE).

  • Reduced the I/O delay when a drive fails and data must be reconstructed by the RAID engine.

  • There was a disk channel error when using SATA drives.

  • The event log did not properly report when both controllers were restarted.

  • LEDs of all drives in enclosure 1 were amber and SMI-S reported them as failed.

  • Management controller hung after recovery.

  • Disconnecting a back end cable caused a controller to halt.

  • Volume was not accessible after converting it from a Master volume to a Standard volume.

  • An unknown setting was reported on one disk in a system.

  • False under-run errors were written to the event logs.

  • A controller halt reported "Double IOB to same Nexus" in the event logs.

  • The power supply module was incorrectly identified in the event logs.

  • Components were incorrectly reported as being in a degraded state.

  • Could not collect a complete set of logs from an array.

CLI-specific fixes and enhancements incorporated into J202R10, J212R10, and J302R10:

  • trust command: Updated CLI help example.

  • trust vdisk command: When run on a vdisk that was online, it reported success when it should have reported failure.

  • set vdisk command: When changing the vdisk name, the name was rejected as being invalid.

  • set debug log parameters command: The command returned an error message and would not perform the requested action.

  • expand snap-pool size max command and variable: Returned an error message.

  • clear events command: Improved online help.

  • set host-wwn-name command: Setting the host-wwn-name did not work as expected.

  • set iscsi-host host <host> <new-nickname> command: Was unable to enter an IQN alias name.

  • show host-wwn-names command: Did not work as expected.

SMU-specific fixes and enhancements incorporated into J202R10, J212R10, and J302R10:

  • When a dedicated spare of a vdisk was deleted, the drive was marked as “leftover”.

  • HP SMU incorrectly assigned reads as RAIDar, resulting in a restart.

Firmware update-specific fixes and enhancements incorporated into J202R10, J212R10, and J302R10:

  • All drives in enclosures 1 and 3 were reported as unknown following a firmware update.

  • Partner Firmware Update (PFU) did not properly update the firmware on Controller B.

  • After a failover, vdisk ownership did not change to the operating controller.

  • After performing a firmware upgrade, some drives were errantly reported as duplicate/leftover drives.

  • After a firmware update, multiple drives were marked as “leftover”.

  • After upgrading firmware, the array had to be restarted.

  • After a firmware upgrade, the controller stalled.

  • A firmware upgrade failed.

Installation instructions

Installation notes and best practices

Warning!

Do not power cycle or restart devices during a firmware update. If the update is interrupted or there is a power failure, the module could become inoperative. If this occurs, contact technical support. The module may need to be returned to the factory for reprogramming.

Caution:

Before upgrading firmware, ensure that the system is stable and is not being reconfigured or changed in any way. If changes are in progress, monitor them and wait until they are completed before proceeding with the upgrade.

  • Before installing this firmware:

    • If updating using a Smart Component, ensure that FTP and telnet are enabled on the arrays being updated.

    • Create a full backup of system data. (Strongly recommended.)

    • Schedule an appropriate time to install the firmware:

      • For single domain systems, I/O must be halted.

      • For dual domain systems, because the online firmware upgrade is performed while host I/Os are being processed, I/O load can impact the upgrade process. Select a period of low I/O activity to ensure the upgrade completes as quickly as possible and avoid disruptions to hosts and applications due to timeouts.

    • Allocate sufficient time for the update:

      • In single domain systems, approximately 30–60 minutes are required for the firmware to load, plus an additional 15–30 minutes for the system to automatically restart.

      • In dual domain systems, an additional 30–60 minutes is required for the second update, plus an additional 15–30 minutes for the second module to automatically restart.

    • Set the Partner Firmware Update option so that, in dual-controller systems, both controllers are updated. (For SMU and FTP updates only; Smart Components automatically enable/disable the PFU settings as needed.) When the Partner Firmware Update option is enabled, after the installation process completes and restarts the first controller, the system automatically installs the firmware and restarts the second controller. If Partner Firmware Update is disabled, after updating software on one controller, you must manually update the second controller.

  • During the installation process:

    • Monitor the system display to determine update status and know when the update is complete.

  • After the installation process is complete and all systems have automatically restarted:

    • Verify system status in the system's management utility and confirm that the new firmware version is listed as installed.

    • Review system event logs.

    • Updating array controller firmware may result in new event messages that are not described in earlier versions of documentation. For comprehensive event message documentation, see the most current version of the HP 2000 Modular Smart Array Reference Guide.

    • The Smart Component update process logs messages to \CPQSYSTEM\Log\cpqsetup.log on the system drive in Windows and /var/cpq/Component.log in Linux.

  • When reverting to a previous version of firmware, note the following:

    • Ensure that both Ethernet connections are accessible before downgrading the firmware.

    • When using a Smart Component firmware package, the process automatically disables Partner Firmware Update (PFU) and then downgrades the firmware on each controller separately (one after the other) through the Ethernet ports.

    • When using a Binary firmware package, you must manually disable Partner Firmware Update (PFU) and then downgrade the firmware on each controller separately (one after the other).

Installing firmware using Smart Components—Windows environments

This is a self-extracting executable module. You can execute this module from the Windows graphical user interface (GUI) or the command line interface (CLI).

GUI update method

  1. Obtain the firmware package and save it to a temporary directory. Firmware for all HP products is available from the HP Business Support Center website at http://www.hp.com/support/downloads.

  2. Using Windows Explorer, navigate to the directory containing the download.

  3. Double-click the executable file.

  4. Follow the onscreen instructions.

    When prompted for logon information, enter credentials for an account with management access rights.

CLI update method

Execute the Smart Component by entering the following command:

CPxxxxxxx.exe /target <ip_address> /user <username> /passwd <password> /s

where

CPxxxxxxx.exe

is the downloaded Smart Component filename

ip_address

is the management IP address of the array controller

username

is the username account with management rights

password

is the password for username

When prompted for logon information, enter credentials for an account with management access rights.

Note:

Instead of command line parameters, you can use the following DOS environment variables:

  • oa_address: Set this variable for the IP address of array controller.

  • oa_username: Set this variable for the username of array controller.

  • oa_password: Set this variable for the password of array controller.
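As a POSIX-shell sketch of the environment-variable form (the original context is a Windows command prompt; the variable names come from the note above, while the IP address and credentials are placeholders):

```shell
#!/bin/sh
# Set the variables the Smart Component reads instead of /target,
# /user, and /passwd. Values here are placeholders, not real credentials.
oa_address=10.1.0.9
oa_username=manage
oa_password=secret
export oa_address oa_username oa_password

# The component would then be launched with only the silent flag:
echo "CPxxxxxxx.exe /s   (target taken from oa_address=$oa_address)"
```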

Installing firmware using Smart Components—Linux environments

  1. Obtain the firmware package and save it to a temporary directory. Firmware for all HP products is available from the HP Business Support Center website at http://www.hp.com/support/downloads.

  2. Open a Linux command console.

  3. From the directory containing the downloaded file, enable execute access to the file by entering:

    chmod +x CPxxxxxx.scexe

    where CPxxxxxx.scexe represents the downloaded Smart Component filename.

  4. Execute the Smart Component by entering a command similar to the following:

    ./CPxxxxxx.scexe -e --target <ip_address> --user <manage_username> --passwd <manage_password>

    Note:

    • Use the -e or -f option when flashing a device, even if it is up to date.

    • Use the -g option when downgrading.

    • Use the -h option to see online help for the command.

    • If the username or password contains an exclamation point (!), enclose the string in single quotes or enter a backslash (\) before the exclamation point. For example: '!manage' or \!manage

  5. Follow onscreen instructions.

    When prompted for logon information, enter credentials for an account with management access rights.
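The quoting rule in the note above can be sketched in isolation; both forms below yield the same literal string (the password value is a placeholder):

```shell
#!/bin/sh
# Two equivalent ways to pass a literal "!" in an argument, per the
# note above: single quotes, or a backslash escape.
pw_quoted='!manage'
pw_escaped=\!manage

# Either form would be passed to the Smart Component, e.g.:
#   ./CPxxxxxx.scexe -e --target <ip_address> --user manage --passwd '!manage'
echo "$pw_quoted"
echo "$pw_escaped"
```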

Installing firmware using the Storage Management Utility (SMU)

  1. Obtain the firmware package and save it to a temporary directory. Firmware for all HP products is available from the HP Business Support Center website at http://www.hp.com/support/downloads.

  2. If using a Smart Component, extract the contents of the Smart Component using one of the following methods:

    • In Windows—Click Extract on the first screen of the Smart Component.

    • In Linux—Enter a command using the following syntax:

      ./CPxxxxxx.scexe --unpack=<folder name>

      where

      ./CPxxxxxx.scexe

      represents the Smart Component filename

      <folder name>

      represents the filename of the destination folder for the extracted binary file

  3. Locate the firmware file in the downloaded/extracted folder. The firmware filename is in the following format: neptunesw-JxxxPyy-zz.bin

  4. In single-domain environments, stop all I/O to vdisks before starting the firmware update.

  5. Log in to the SMU and select Manage > Update Software > Controller Software.

    A table displays currently installed firmware versions.

  6. Click Browse, and then select the firmware file to install.

  7. Click Load Software Package File.

    Allow approximately 30–60 minutes for the firmware to load, plus an additional 15–30 minutes for the automatic restart to complete on the controller you are connected to. Wait for the progress messages to specify that the update has completed.

    In dual-controller systems with Partner Firmware Update enabled, allow an additional 30–60 minutes for the second update, plus an additional 15–30 minutes for the second module to automatically restart.

  8. In the SMU display, verify that the expected firmware version is installed on each module.

Installing firmware using FTP

  1. Obtain the firmware package and save it to a temporary directory. Firmware for all HP products is available from the HP Business Support Center website at http://www.hp.com/support/downloads.

  2. If using a Smart Component, extract the contents of the Smart Component using one of the following methods:

    • In Windows—Click Extract on the first screen of the Smart Component.

    • In Linux—Enter a command using the following syntax:

      ./CPxxxxxx.scexe --unpack=<folder name>

      where

      ./CPxxxxxx.scexe

      represents the Smart Component filename

      <folder name>

      represents the filename of the destination folder for the extracted binary file

  3. Locate the firmware file in the downloaded/extracted folder. The firmware file name is in the following format: neptunesw-JxxxPyy-zz.bin

  4. Using the SMU, prepare to use FTP:

    1. Determine the network-port IP addresses of the system controllers.

    2. Verify that the system FTP service is enabled.

    3. Verify that the user you log in as has permission to use the FTP interface and has manage access rights.

  5. In single-domain environments, stop I/O to vdisks before starting the firmware update.

  6. Open a command prompt (Windows) or a terminal window (UNIX), and navigate to the directory containing the firmware file to load.

    1. Enter a command using the following syntax:

      ftp <controller-network-address> (for example: ftp 10.1.0.9)

    2. Log in as an FTP user (user = ftp, password = flash).

    3. Enter a command using the following syntax:

      put <firmware-file> flash

      where <firmware-file> represents the binary firmware filename.

    Allow approximately 30–60 minutes for the firmware to load, plus an additional 15–30 minutes for the automatic restart to complete on the controller you are connected to. Wait for the progress messages to specify that the update has completed.

    In dual-controller systems with Partner Firmware Update enabled, allow an additional 30–60 minutes for the second update, plus an additional 15–30 minutes for the second module to automatically restart.

  7. If needed, repeat these steps to load the firmware on additional modules.

  8. Quit the FTP session.

  9. In the SMU (or CLI) display, verify that the proper firmware version is displayed for each module.
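The interactive session in step 6 can also be scripted. The following is a hedged sketch that generates a command file for the ftp client; the login (user ftp, password flash) and firmware filename format come from the steps above, while the address is an example and the binary transfer mode is added here as a common requirement for firmware images, though the original steps do not mention it.

```shell
#!/bin/sh
# Build a command script mirroring steps 6.1-6.3 of the FTP procedure.
CTRL_IP=10.1.0.9                    # example controller address from step 6.1
FW_FILE=neptunesw-JxxxPyy-zz.bin    # firmware file located in step 3

cat > ftp_update.txt <<EOF
user ftp flash
binary
put $FW_FILE flash
bye
EOF

# In practice, the session would then be run non-interactively:
#   ftp -n $CTRL_IP < ftp_update.txt
```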

Installation troubleshooting

If you experience issues during the installation process, do the following:

  1. When viewing system version information in the SMU System Overview panel, if an hour has elapsed and the components do not show that they were updated to the new firmware version, refresh the web browser. If version information is still incorrect, proceed to the next troubleshooting step.

  2. If version information does not show that the new firmware has been installed, even after refreshing the browser, restart all system controllers. For example, in the CLI, enter the restart mc both command. After the controllers have restarted, one of three things happens:

    • Updated system version information is displayed and the new firmware version shows that it was installed.

    • The Partner Firmware Update process automatically begins and installs the firmware on the second controller. When complete, the versions should be correct.

    • System version information is still incorrect. If system version information is still incorrect, proceed to the next troubleshooting step.

  3. Verify that all system controllers are operating properly. For example, in the CLI, enter the show disks command and read the display to confirm that the information displayed is correct.

    • If the show disks command fails to display the disks correctly, communications within the controller have failed. To reestablish communication, cycle power on the system and repeat the show disks command. (Do not restart the controllers; cycle power on the controller enclosure.)

    • If the show disks command from all controllers is successful, perform the firmware update process again.

Known issues and workarounds

This is a cumulative list of known issues and workarounds since the initial firmware release.

  • How to get out of failure mode:

    1. Pull the host cables.

    2. Power cycle the raid-head (controller) enclosure.

    3. After the reboot, wait for the disk lights to stop flashing; this indicates that cache de-staging is complete.

    4. Plug the host cables back in and reconfigure the host to resume I/O.

  • SSH access to the MSA2000 CLI may fail on repetitive attempts to open and close the connection.

    When using telnet and secure shell (SSH) to access the command line interface (CLI), the connection may fail when multiple sequences or commands are sent from a script. The issue does not occur if a delay, for example 0.25 to 1 second, is inserted between the ssh close and the subsequent ssh open commands in the script.

  • An initializing vdisk is accessible immediately, but is not fault tolerant.

    The "Virtual Disk Initialization" section in the HP 2000 Family Modular Smart Array Reference Guide states: "If the virtual disk is initializing online, you can start using it immediately," which may be misleading. As shown in the "Virtual Disk Icons" section in the guide, the vdisk is NOT fault tolerant while the vdisk is initializing or in a critical state.

  • MPIO reporting path fail-over to single LUN on Windows Server 2008 host.

    A Windows Server 2008 host may occasionally lose a single path to a single LUN. The Windows 2008 MPIO reports a path fail-over, however, the path may not come back. This issue is fixed with Microsoft QFE KB957316, which is available from the Microsoft support website at http://support.microsoft.com/kb/957316. Review the information and download the appropriate QFE for the Windows Server 2008 operating system. If Microsoft QFE KB957316 is not installed, the system must be rebooted to correct the issue.

  • A failed drive may not be displayed on the Enclosure View page in the SMU.

    Should this condition occur, check the drive LED indicator for a solid amber light indicating a failed drive. If the drive was configured for SMART detection, check for entries in the array system event log. Note: The failed drive status displays as Missing using the show disks encl command from the command line interface (CLI).

  • Removing all the drives from a JBOD enclosure causes the enclosure to be removed from the SMU Enclosure View page.

    To check the status of the enclosure, access the array using one of the command line interface (CLI) methods and run the show enclosures command.

  • Windows Server 2003 SP2 fails to hibernate causing the system to lock up.

    Applying Microsoft hot fix KB940467 corrects the issue for Windows Server 2003 SP2.

    To obtain KB940467 from the HP website:

    1. Go to http://www.hp.com/support/storage.

    2. In the Disk Storage Systems section, select MSA Disk Arrays.

    3. Select your product.

    4. Select Download drivers and software.

    5. Select your product.

    6. Select your Windows Server 2003 operating system.

    7. In the Operating System — Enhancements and QFEs section, select the KB940467 QFE.

  • I/O may not resume to SUSE and Red Hat Linux hosts upon cable reinsertion.

    The likelihood of this issue occurring increases with the number of LUNs configured on the storage array and with I/O load. The failover/failback process is working correctly at the multipath driver level. At the multipath application level, multipath maps are not getting updated. To update the maps at the application level, run the command multipath -v0. This command may take a few minutes with heavy I/O running on the system.

  • Windows Server 2003 host may hang after failure of a vdisk.

    The start menu bar goes away, applications may become slow to close, and the system does not reboot (shuts down all programs and closes network connections but hangs at gray screen with mouse cursor still active). Microsoft is working on a QFE hotfix. Power cycling the host corrects the issue.

  • Installing the latest driver in RHEL5.2 with HP Device Mapper (HPDM) and Boot from Storage configuration may stop the OS from booting.

    After installing the 4.00.13.04-2 driver in a boot from storage configuration with HPDM enabled, the RHEL 5.2 host may no longer boot into the operating system. HP is working on the issue. Changing to the previous driver (for example, older INITRD) allows the host to boot from storage.

  • After a cable move, LSI Logic MPT SAS BIOS loads incorrectly, showing initialization twice on the server.

    After moving SAS cables to a different SAS host bus adapter (HBA) on the server, the MPT SAS BIOS may incorrectly load and initialize SAS HBA cards, after which the MSA2012sa stops accepting I/O requests from the server. Use the SMU to reset the host port interface on both MSA2012sa storage controllers. Log into the SMU and navigate to the Manage > Utilities > host utilities > reset host channel page and click the Reset Host Port button to initiate the action. Although the web page returns immediately with a response, it may take up to 1 minute for the MSA2012sa storage controller to process the request and make the host ports ready for initialization by the SAS HBA card. A reboot of the server is not required.

  • An unanticipated path change may occur on Red Hat Linux 4 Update 6 hosts using HPDM Multipath software.

    In a multi-path configuration using HPDM software during periods of heavy I/O load, Linux Red Hat 4.6 hosts may experience unanticipated path change due to a SCSI I/O timeout. This issue does not occur for single path configurations. To reduce the likelihood of occurrence, ensure the storage array is properly configured and has a balanced I/O load.

  • Drives may not be seen by the LSI Logic MPT SAS BIOS when more than one SAS HBA card is installed on some servers.

    After making some configuration changes, the MPT SAS BIOS may not list all available drives for boot during startup. This message may appear during startup: Adapter configuration may have changed, reconfiguration is suggested. The MPT SAS BIOS setup utility, accessed by pressing F8 during boot-up and selecting the SAS Configuration Tool, can be used to add the HBA back to the boot list. A reboot of the server is required.

  • A newly created snapshot volume on a single controller array presented and mapped to a Windows host may not be detected by the Windows operating system.

    Use the Rescan Disks command in Windows Disk Management to force detection of the newly created snapshot volume. Rescanning disks can take several minutes, depending on the number of hardware devices installed.

  • The SMU may hang after drive failures.

    If this condition occurs, access the array using the command line interface (CLI) and restart the management controller (MC) using the command: restart mc a (single controller) or restart mc both (dual controller).

  • Issuing the CLI command set cache-parameters read-ahead maximum does not change the setting although the message indicates success.

    In some cases, such as when the number of volumes is large or the available read cache is small, the "maximum" setting may display a smaller value than expected. The maximum read-ahead cache size is calculated by dividing the available read cache by the number of presented volumes.
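The calculation described above can be illustrated with made-up numbers (these are examples, not MSA defaults):

```shell
#!/bin/sh
# Effective "maximum" read-ahead = available read cache / presented volumes.
# Figures below are hypothetical, for illustration only.
read_cache_mb=1024     # available read cache, in MB
volumes=16             # number of presented volumes
max_readahead_mb=$((read_cache_mb / volumes))
echo "${max_readahead_mb} MB"   # 1024 MB / 16 volumes
```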

  • An NTP server IP address change on one controller does not propagate to the other controller.

    Disable NTP on both controllers, set the IP address of the intended NTP server, and re-enable NTP.

  • When a controller is powered off or in a failed state, the Link LED remains ON although the host indicates a link down state.

    Disregard the Link LED when the controller removal LED is illuminated.

  • On the Storage Management Utility (SMU) Manage > Scheduler page, the Snapshot Prefix field allows up to 31 characters to be input but then displays an error message indicating the prefix can have 1 to 14 characters.

    Only type 1 to 14 characters in the Snapshot Prefix field.

  • On Linux, a scan may be required to detect a newly presented Volume/LUN.

    A Volume/LUN newly presented to a Linux host may not be detected by the Linux operating system. This is a Linux issue, not an array issue. A Linux scan can be used to force detection of the newly presented Volume/LUN.

    The scan command syntax is:

    echo "<Channel> <Target identifier> <LUN>" > /sys/class/scsi_host/host<Host number>/scan

    where <Channel>, <Target identifier>, and <LUN> each accept the - wildcard; <Host number> can be 0, 1, 2, or 3.

    Examples:

    • echo "- - -" > /sys/class/scsi_host/host1/scan # scan all Channels, all Targets, and all LUNs of Host 1.

    • echo "0 - -" > /sys/class/scsi_host/host1/scan # scan Channel 0, all Targets, and all LUNs of Host 1.

    • echo "0 0 -" > /sys/class/scsi_host/host1/scan # scan Channel 0, Target 0, and all LUNs of Host 1.

    • echo "0 0 0" > /sys/class/scsi_host/host1/scan # scan Channel 0, Target 0, and LUN 0 of Host 1.

    Note:

    For multipath/dual HBA configurations connected to the same array, run the same command for both HBAs. For example:

    echo "- - -" > /sys/class/scsi_host/host1/scan

    echo "- - -" > /sys/class/scsi_host/host2/scan

    For HPDM Multipath configuration, after running the scan command, run the following commands to update the multipath maps in the kernel:

    /etc/init.d/multipathd restart

    /sbin/multipath -v3

  • There is an issue with DMS where I/O can time out.

    • All of the following conditions must be true for this to occur:

      • DMS is enabled.

      • The Snap Pool (also called the Backing Store) and the Master Volume are on the same vdisk.

      • Host I/Os are running. Based on attempts to re-create the issue, it may appear only under heavy host I/O; because there were too few runs to be statistically significant, treat heavy I/O as an observation rather than a confirmed requirement.

      • A failover operation occurs (as in a controller failure).

  • If the surviving controller crashes during a controller shutdown or code update, follow these steps:

    1. Unplug all host interfaces from both controllers.

    2. Reboot the crashed controller and perform the firmware update procedure again if applicable.

    3. Once the code has been updated, or both controllers are now operational, plug the host cables back in.

    4. Bring the storage array back online to the host(s).

Effective date

December 2012