Tuesday, August 9, 2016

Amazing 9.07 is a real bundle, bundled with lots and lots...

Integration with 3PAR Arrays and StoreOnce Backup Systems


As always, there is a slew of 3PAR and StoreOnce engineering integrations within Data Protector, and this release is no exception, containing no fewer than NINE, including:


VMware incremental GRE, Power On and Live Migrate from StoreOnce Catalyst: Data Protector users can now power on, live migrate and perform granular recovery (i.e. recover individual files from within a virtual machine) directly from StoreOnce Catalyst, functionality that was previously exclusive to SmartCache devices and 3PAR snapshots.


StoreOnce multi-protocol access: Backup data can now be written to a StoreOnce store via either Catalyst over IP (CoIP) or Catalyst over FC (CoFC), giving users the freedom to write with one protocol and read with another, e.g. write to StoreOnce 'A' via CoFC, replicate to StoreOnce 'B' via CoIP, and then read from StoreOnce 'B' via CoFC.


3PAR Remote Copy with VMware / Zero Downtime Backup (ZDB): With the introduction of support for 3PAR Remote Copy, Data Protector customers using ZDB with 3PAR can now take advantage of the advanced technology in both solutions: VM snapshots can be moved at the storage level to a secondary array at a DR site, where all backup operations can be undertaken without any I/O interruption to the production systems on the production array.

3PAR Port Set/Host Set/Volume Set Support: When working with 3PAR, Data Protector 9.07 sees the introduction of support for Host, Port and Volume Sets. Creating sets of hosts, ports and volumes makes complex configurations much easier to work with and is in line with HPE 3PAR StoreServ best practices.

3PAR Peer Persistence: Peer Persistence ensures that when 3PAR resources change (paths, ports, array controllers, etc.), the presentation of those resources to server hosts is managed properly.

With Data Protector 9.07, we are continuing to deliver on our commitment to simplify management and add more value for customers who use Data Protector with HPE Storage systems.


Enhancements for Virtualized Environments


In Data Protector 9.07 we have also introduced new features for Microsoft Hyper-V and VMware environments:


VMware non-CBT is back: Due to popular demand, and to stay compatible with older VMware systems, the Changed Block Tracking (CBT) option can once again be set manually, either per VM or for a selection of VMs.


Individual VHD/VHDX support for Microsoft Hyper-V: In Data Protector 9.06 and earlier versions, Hyper-V virtual machines are backed up as a single object; for restore purposes, however, administrators may want to choose individual virtual disks from a given VM instead of all virtual disks. Data Protector 9.07 solves that problem with the introduction of support for individual .VHD and .VHDX virtual hard disks. We also offer merging of snapshots for a clean environment after a restore, removing unwanted snapshots that were left in the system.


OpenStack Support

Data Protector 9.07 introduces support for OpenStack Cinder volume backups via VMware. Data Protector can now make use of several of OpenStack's major components:


SWIFT, which is mainly used as a backup target. It was already supported as a backup device type in previous versions of Data Protector.


CINDER, which can be a source device for backups. This is where cloud VMs store their data at runtime.


NOVA, the compute instance to which CINDER volumes are mapped. The Nova instance is seen as a VM in VMware, while the Cinder volume appears as a shadow VM under the Nova instance VM.


Data Protector supports VMware as the hypervisor for OpenStack cloud VMs, so users can back up and restore VMs residing in NOVA, or shadow VMs, providing improved integration with OpenStack environments.


In a restore situation, Data Protector can bring back the VM on VMware as well as the NOVA instance and the CINDER volumes attached to it in the cloud. DP can also register these resources in the OpenStack dashboard. We are very excited about this new feature!


PostgreSQL Support

We are adding support for the backup and recovery of PostgreSQL and EnterpriseDB databases via the 'online integration agents' family, allowing all-encompassing data protection and point-in-time database recovery for all users of PostgreSQL / EnterpriseDB.


This recent addition extends our database backup support to a comprehensive list including Oracle, MySQL, Microsoft SQL Server, PostgreSQL and more. For detailed information, please see the support matrices and integration guides.


NetApp Integration Enhancements

And for NetApp customers, we are very excited to announce a range of Data Protector integration enhancements, including 3-way NDMP backup and NetApp NDMP Cluster Aware Backup (CAB).


A large number of NetApp systems run in clustered mode, which allows NDMP to fail over to another node or to make use of its resources.


This is why we have introduced an agent for Cluster Aware Backup (CAB), which removes the management overhead from backup/recovery configuration and usage. To use Cluster Aware Backup with NetApp, simply choose 'NDMP – NetApp CAB' when adding a new device.


Consider an example in which a backup is configured for the volumes \docs, \pictures and \slides, spread over two cluster nodes, with the backup device connected to the right-hand node. Data Protector switches to 3-way backup mode, reading the \slides volume from the left-hand node via the right-hand node. In this scenario, Data Protector and Cluster Aware Backup also take care of resource changes (i.e. failover) and act appropriately.

Saturday, August 6, 2016

[90:114] Cannot unload medium, target slot (255) appears to be occupied

[Warning] From: BMA@linuxsrv155.com "ESL01_D17"  Time: 2/9/2015 1:13:44 PM
[90:54]   /dev/nst1
Cannot open device (Unit not ready.)

[Major] From: BMA@linuxsrv155.com "ESL01_D17"  Time: 2/9/2015 1:13:55 PM
[90:114]   By: UMA@mediasrv01.com@/dev/rchgr/autoch3
Cannot unload medium, target slot (255) appears to be occupied

Solution:

> The above error occurs because DP could not unload the medium back into the slot from which it was loaded into the drive during the backup/restore.

> Move the tape manually from the drive to any free slot using UMA by following the steps below. 

> Log in to the media server to which the tape library/autoloader is connected.

>> Run devbra -dev; the output lists the drives and exchangers detected on the system.

>> Copy the SCSI path of the exchanger (here it is /dev/rchgr/autoch3).
>> Execute the command below to enter the UMA prompt, then move the medium to an empty slot:
> uma -ioctl /dev/rchgr/autoch3

/dev/rchgr/autoch3> move D17 S13

> If there are no empty slots available, eject full media to the I/O slots and then move the medium stuck in the drive to a freed slot.
> Then retry the backup, which should now succeed (see the example session below).
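
To illustrate, here is a sketch of a complete UMA session using the exchanger path and addresses from the log above (S13 is just an example; use stat to find a slot that is actually empty):

uma -ioctl /dev/rchgr/autoch3
/dev/rchgr/autoch3> stat            (list drives and slots to see which are full or empty)
/dev/rchgr/autoch3> move D17 S13    (move the medium from drive D17 to empty slot S13)
/dev/rchgr/autoch3> exit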

Hope this helps!

Sunday, July 31, 2016

Orphan object(s) in the backup specification

Warning message from DP

Warning: While reading mount point information, Data Protector found 2 backup object(s), which reference inexistent mountpoint(s) on client system 'winserver123.abc.com'.
Do you want to leave this(these) orphan object(s) in the backup specification?

Reason:

  • Possibly a shared drive that is no longer available on the workstation, or one that was reclaimed.
  • If the mountpoint still exists, it is not accessible to the DP software. In that case, the data list needs to be updated to reflect what actually needs to be backed up.
Solution:
  • Log in to the client workstation and check under Computer Management -> Storage -> Disk Management.
  • Check that the problematic mountpoint exists and is in a healthy state. If it does not, remove the mountpoint from the backup specification and save it (a command-line alternative is sketched below).
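
As an alternative to the GUI, the backup specification can also be fixed from the Cell Manager command line with omnidownload and omniupload. A minimal sketch, assuming a hypothetical backup specification named WINSERVER123_FS:

omnidownload -datalist "WINSERVER123_FS" -file /tmp/WINSERVER123_FS.txt
    (download the specification, then delete the object entry for the stale mountpoint in the file)
omniupload -modify_datalist "WINSERVER123_FS" -file /tmp/WINSERVER123_FS.txt
    (upload the corrected specification back to the Cell Manager)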

Thursday, July 21, 2016

ORA-19588: archived log RECID ***** STAMP ********* is no longer valid

Error log:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on dev_0 channel at 03/08/2014 07:50:43
ORA-19588: archived log RECID 99528 STAMP 841638632 is no longer valid

Recovery Manager complete.
[Major] From: ob2rman@dbserver01.com "stin"  Time: 03/08/14 07:50:56
External utility reported error.

RMAN PID=12391

[Major] From: ob2rman@dbserver01.com "stin"  Time: 03/08/14 07:50:56
The database reported error while performing requested operation.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
 RMAN-03009: failure of backup command on dev_0 channel at 03/08/2014 07:50:43
 ORA-19588: archived log RECID 99528 STAMP 841638632 is no longer valid

Recovery Manager complete.

Cause:


-> The archive log backup fails because of invalid/stale archive log records that are left uncleared after a backup completes.
-> It can also happen when two backups run at the same time. For instance, an archive log backup is triggered while a full backup is still active; the overlap causes this error.

Solution:
The database administrator should run "crosscheck archivelog all" in RMAN. This synchronizes the RMAN repository with the archived logs on disk, so that records for logs which have already been backed up and deleted are marked expired instead of failing the backup.
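
For example, a minimal RMAN session on the database host (a sketch; check your retention policy before deleting anything):

rman target /
RMAN> crosscheck archivelog all;      # mark records whose files are gone as EXPIRED
RMAN> delete expired archivelog all;  # optionally purge the expired records
RMAN> exit;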

Hope that helps!

Tuesday, July 19, 2016

Restarting DP Cell Manager

How to restart DP cell manager

When the cell manager hangs, or any kind of malfunction requires the DP cell manager to be reset, the following steps could help!


1. To check the status of the cell manager services:
/opt/omni/sbin/omnisv -status

2. Run the following command to stop all DP-related services (CRS, RDS and MMD):
/opt/omni/sbin/omnisv -stop

3. Double-check that all the services went down:
/opt/omni/sbin/omnisv -status

4. Check whether any omni-related processes are still running. If none are, move to the next step (a one-pass cleanup loop is included in the script sketch after step 7):
ps -ef | grep omni    (check whether any omni processes are in a hung state)
kill -9 <PID>         (kill any remaining process using its process ID)
ps -ef | grep omni    (double-check, then move to the next step)

5. Start the cell manager services now

/opt/omni/sbin/omnisv -start

6. Check that all the services are up. If not, repeat steps 2, 3 and 4:

/opt/omni/sbin/omnisv -status

7. Once the services are up, run the command below before any backup is triggered after the restart. This clears all in-progress sessions that went into a hung state while the services were restarted.

/opt/omni/sbin/omnidbutil -clear
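
For convenience, the whole sequence can be wrapped in a small script. This is only a minimal sketch, assuming a Unix cell manager with the standard /opt/omni paths; the '[o]mni' pattern matches any process whose command line contains "omni" while excluding the grep itself, so review the matches before running this as root:

#!/bin/sh
# Sketch of the restart sequence above; run as root on the cell manager.
/opt/omni/sbin/omnisv -stop          # stop CRS, RDS and MMD
sleep 10                             # give the services time to exit
for pid in $(ps -ef | grep '[o]mni' | awk '{print $2}'); do
    kill -9 "$pid"                   # clean up any omni processes left hanging
done
/opt/omni/sbin/omnisv -start         # start the cell manager services
/opt/omni/sbin/omnisv -status        # confirm all services are up
/opt/omni/sbin/omnidbutil -clear     # clear sessions stuck in progress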

Please use these steps with caution!