Simpana 10 – Commserve DR Backup

A demo I put together that shows how to perform a Commserve DR Backup in Simpana 10. It walks through how to run the backup manually and where to find the files the process produces. Sometimes the DR backup needs to be run and the files collected by hand, so they can be uploaded through another channel, for example in an environment with no connectivity to the outside world, i.e. the internet.

It’s a follow-on from a previous post that covers the same process in Simpana 9, which you can find here.

Hope you enjoy the demo. If this post helps you, please leave a comment.

Simpana 10 – advanced client properties – firewall – outgoing routes UI change

Noticed that the Simpana 10 Advanced Client Properties – Firewall – Outgoing Routes user interface changed a little between SP5a and SP6. It may have happened in SP5b, which I couldn’t test at the time, but the difference was certainly there in our jump from SP5a to SP6.

Check out the pictures below to see what I mean. Looks like they relabelled some items.

Advanced Client Properties – Outgoing routes – SP5a

Advanced Client Properties – Outgoing routes – SP6

Simpana 10 – PostgreSQL 8.4 backup on CentOS Linux 5.10 x64 – example

I am going to assume that this is a test deployment and that you have installed your CentOS 5.10 x64 Linux the way you want it. From that point on, I will cover what I needed to do to get the distribution release of PostgreSQL working with the Simpana 10 PostgreSQL iDA to perform a backup. Of course, some assumed knowledge is present.

  1. Install the postgresql packages onto your CentOS client.
    $ sudo yum install postgresql84 postgresql84-server postgresql84-devel
  2. Start up the postgresql server for the first time. For the first start only, you need to run the initdb switch instead of start.
  3. $ sudo service postgresql initdb
  4. We should also enable the service to run at boot moving forward:
    $ sudo chkconfig postgresql on
  5. Before we change the authentication method below, we need to set a password that we know for the postgres user in the postgresql database. To do this, switch to the postgres user and connect to the postgresql database:
    $ sudo su -
    # su - postgres
    $ psql
  6. Now at the postgres prompt, update the password for the postgres user (unless you want to create a user of your own; I won’t discuss how, just how to set the postgresql user password). Be sure to remember what you set the password to, it will be required later on.
    postgres=# ALTER USER postgres WITH PASSWORD 'password';
    ALTER ROLE
    postgres=# \q
  7. The postgresql packages distributed with CentOS don’t use md5 password authentication; they default to peer/ident based authentication. In this example we will flip this to md5 based authentication, and we will touch on a peer/ident based authentication example in a later post. Perform the changes below to enable md5 authentication.
    $ cd /var/lib/pgsql/data
    $ sudo vi pg_hba.conf
    Find the line at the bottom of the file that looks like the one below;
    local     all     all                ident
    You need to change this to have md5 on the end, i.e. replace ident with md5. Save the changes.
  8. Now restart postgresql for the changes to take effect. (required)
    $ sudo service postgresql stop
    $ sudo service postgresql start
  9. Now you can test that this has worked by executing the command below as root; when prompted for the postgres user password, authenticate using the password set in step 6.
    # psql -U postgres
    If it worked, you will get the famous postgres=# prompt, at which you can enter \q [enter] to quit.
  10. Next up we need to enable archive logs. Edit the postgresql.conf file, which on a CentOS rpm based install lives in /var/lib/pgsql/data, and add the lines below in the Archiving section;
    archive_mode = on
    archive_command = 'cp %p /var/postgresql/archive/%f'
    Save those additions and move on below.
  11. Make sure to create the folders/destination used in the archive_command above and ensure the postgres user can write to it (see the sketch after this list).
  12. Now restart postgresql for the changes to take effect. (required)
    $ sudo service postgresql stop
    $ sudo service postgresql start
  13. Install the Simpana PostgreSQL iDA.
  14. Once installed, refresh the Simpana Console and attempt to create your PostgreSQL instance. See the dialog below for the values I used in this configuration. Of course, the username and password are the postgres credentials we configured in step 6. Note the archive log directory is the one we used in the archive_command string at step 10.
    simpana_10-centos-5.10_x64-postgresql_instance_creation
  15. If everything goes to plan you should have your instance created, and you can now configure the DumpBaseBackupSet subclient and/or the FSBasedBackupSet subclient. For the difference between what each does, I recommend you review the documentation, as each backupset has its own unique capabilities. See the bottom of the Backup documentation page for explanations.
  16. Assign a Storage Policy to each subclient and run a backup of each to confirm it works.
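
To tie the above together, below is a minimal shell sketch (referenced at step 11) that creates the archive destination used in archive_command and then verifies that both the md5 authentication from step 7 and WAL archiving are working. It assumes the paths used above and the password set at step 6; pg_switch_xlog() is the PostgreSQL 8.4 function for forcing a log switch.

# Create the archive destination from step 11 and let postgres write to it
$ sudo mkdir -p /var/postgresql/archive
$ sudo chown postgres:postgres /var/postgresql/archive
$ sudo chmod 700 /var/postgresql/archive

# Confirm md5 authentication works with the password from step 6
$ psql -U postgres -c "SELECT version();"

# Force a log switch; a new archived segment should appear in the directory
# (only if there has been some WAL activity since the last switch)
$ psql -U postgres -c "SELECT pg_switch_xlog();"
$ ls -l /var/postgresql/archive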

Simpana 10 – Specifying the media parameters for RMAN command line operations – Example

A recent addition to the Simpana 10 Oracle iDA over Simpana 9 is the ability to specify Media Parameters for RMAN Command Line Operations, which wasn’t possible in Simpana 9.

Below is an example of its use; the documentation links from CommVault are 1, 2 and 3.

The client in this example is “jwora1”, running Windows 2008 R2 x64 and an Oracle 11gR2 64-bit release. Simpana 10 with SP4 is installed on both the client and the Commserve – “jwcs”.

RMAN Script:

run {
allocate channel ch1 type 'sbt_tape' PARMS="BLKSIZE=262144,ENV=(CVOraSbtParams=C:\p.txt,CvClientName=jwora1,CvInstanceName=Instance001)" trace 2;
backup current controlfile;
}

Contents of p.txt file below;

[sp]
SP_Main-jwma1

[mediaagent]
jwma1
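
With the parameters file in place, the backup is kicked off as a normal command line RMAN run on the client. A minimal example, assuming the script above is saved as C:\backup_cf.rman (a file name I’ve made up for illustration) and that OS authentication to the target database is available:

C:\> rman target / @C:\backup_cf.rman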

Below is a look at the GUI configuration for the Oracle instance “orcl” on client “jwora1”, which shows that third party command line backups should use Storage Policy (SP) “SP_Main-jwcs”. However, as you will note from the running of the job using the Media Parameters, it uses a different SP and MediaAgent, as defined by the p.txt file I passed.

subclient not configured with any SP

orcl properties showing command line backup should use SP – SP_Main-jwcs by default.

orcl properties showing log backups would use SP – SP_Main_jwcs by default.

sample execution of my rman backup script – current control file backup

Commserve Job Controller showing the running job. Note which MediaAgent is used and SP.

If you find my posts of value, please send me some feedback, especially if you found this post and it helped you in your travels.

UPDATE: To follow on from the example above, the following is also possible. If you don’t pass CvClientName and CvInstanceName on the channel allocation, you can pull those from the parameters file too. Below is a sample of the alternative backup script syntax and parameters file contents. All of this is documented on the documentation links provided at the top of the post.

RMAN Script:

run {
allocate channel ch1 type 'sbt_tape' PARMS="BLKSIZE=262144,ENV=(CVOraSbtParams=C:\p2.txt)" trace 2;
backup current controlfile;
}

Contents of p2.txt file:

[sp]
SP_Main-jwma1
[mediaagent]
jwma1
[CvClientName]
jwora1
[CvInstanceName]
Instance001

The parameter file can have blank lines between the definitions, like in the first example, which I prefer as it makes the file easier to read. The p2.txt file has no extra spacing, which also works, but I personally find it harder to read.

Enjoy.

DB2 Archive log backup failing

Had a client/server that was unable to run any DB2 archive log backup. Attempts to run it would fail, and when I looked into the logs the failure was seen against the key log lines below. I’ve named the log file and the component where each log exists.

CommserveJobManager.log

2100 2944 02/16 11:40:11 111111 Servant [---- SCHEDULED BACKUP REQUEST ----], taskid [15821] Clnt[aaaaa003] AppType[DB2 on Unix][62] BkpSet[AA1] SubClnt[AA1_Archive_logs] BkpLevel[Full][1]

CommserveSrvDB2Agent.log

13720 26b4 02/16 11:40:16 111111 SrvDb2Agent::AgentAttach() - 1: dataAgentAttach-> HostName=aaaaa003.bbbbb.ccc*aaaaa003*8400*8402 start...
13720 26b4 02/16 11:40:16 111111 ** CVSession::attach(ulPortArg):
	- RemoteHost=aaaaa003.bbbbb.ccc.
	- RemoteProcess=todo.
	- AuthenticateClient failed. Error=9000026.

13720 26b4 02/16 11:40:16 111111 SrvDb2Agent::AgentAttach() - 0: dataAgentAttach() failed: m_cvsOA.getLastError: Err Number=9000026 sLastErr=[CVSession::authenticateClient]:Remote system [aaaaa003.bbbbb.ccc]. Failed authentication returned from server..
13720 26b4 02/16 11:40:16 111111 SrvDb2Agent::RunClient() - 0: AgentAttach() failed.
13720 26b4 02/16 11:40:16 111111 SrvDb2Agent::ExitHere() - 1: JobObjectInitialize(111111) started...

Failed authentication messages could indicate, as the message says, that the network password used by the Commserve and Client to communicate is different, and thus the communication fails. i.e. the Client could have a network password stored in its configuration that doesn’t match the one for this client stored in the Commserve database.

OR

Something might be wrong with the communication between the Commserve and the Client (in either direction).

So I would recommend checking that the communication is fine and that the Commserve is resolving the client correctly and reaching the correct IP address for the client.
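
As a rough sketch of the kind of checks I mean, using the masked client name from the logs above (the Commserve name below is made up, so substitute your own):

# From the Commserve: does the client name resolve to the correct IP, and is it reachable?
nslookup aaaaa003.bbbbb.ccc
ping aaaaa003.bbbbb.ccc
# The log line shows ports 8400*8402, so check a CVD port answers as well
telnet aaaaa003.bbbbb.ccc 8400

# From the client: repeat the same checks back towards the Commserve
nslookup commserve.bbbbb.ccc
ping commserve.bbbbb.ccc
telnet commserve.bbbbb.ccc 8400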

If this all checks out and communication between the Client and the CS is fine in both directions, you might have to force a network password reset/sync. The easiest way I find to do this is a LOCAL uninstall on the problem client (this ensures that the backup history for the client is retained in the Commserve), followed by a reinstall on the client using the Simpana DVD3 media, making sure to install using the same details the client is displayed as in the Commserve. During the reinstall the network password is put back in sync.

In the case of the client here, we had to reinstall it. Post the reinstall, the client and backup attempts worked fine. Note that this condition affects all Simpana iDAs on the client, and thus impacts all backups for the client.

Cache Database is failing at client side

Recently came across the following failure condition against a Unix client during backups via CommVault Simpana. Not something I had seen before; however, as you will see below, the reason for it is quite simple once you understand why it happens.

Exact error reported in Simpana Console for the job;

Error Code: [10:15]
Description: [Cache Database is failing at client side.]

Picture below shows the error too;

job-failure-cache-database-is-failing-at-client-side

Check the client’s properties and confirm if it has the following options enabled. Per the screen capture below, we want to confirm under the Client Side Deduplication tab if the “Enable Client Side Disk Cache” tick box is ticked.

client-properties-client-side-deduplication

If this option is ticked, as shown above, the client will use a cache-database with a max size of 4096MB (4 Gigabytes), created in the same location as the client jobResults folder.

Guess what happens if your client doesn’t have enough disk space for this cache-database? If you guessed the error we got during this backup attempt, you’d be absolutely correct.

The client in question only had about 2GB of disk space available, per the output below;

hostname:/opt/simpana/Log_Files]bdf /opt
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    20971520 18749152 2205632   89% /

This meant that backup attempts against this client, with the option enabled and the max cache-database size set to 4096MB, produced the error shown at the start of the post.

The solution here: increase space on the volume where jobResults is hosted, turn off the Client Side Disk Cache, reduce the Client Side Disk Cache size, or move jobResults to a volume with more space. You just need to pick the option that suits you.
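
A quick way to sanity-check a client before backups run is to compare the free space on the volume hosting jobResults against that 4096MB maximum. A minimal sketch below; the jobResults path is an assumption, so take the real location from the client’s properties in the Console:

# Path is assumed for illustration; verify the real jobResults location first
JOBRESULTS=/opt/simpana/iDataAgent/jobResults
# Free space on the volume hosting jobResults (on HP-UX, bdf gives the same view, as above)
df -k "$JOBRESULTS"
# With Client Side Disk Cache ticked you want at least 4096MB (4194304 KB) available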

Simpana 10 – Clearing simpana lock file on Unix clients

Just wanted to put together a clip that covers this condition which can be seen on Unix clients for Simpana 9 and Simpana 10. The steps to resolve it on both versions are exactly the same.

This condition is typically seen when the Simpana services do not get shut down cleanly – maybe the system never ran all the init scripts (including ours) during shutdown, or the system itself wasn’t shut down cleanly.

What will happen in that condition is that on the next simpana services start you will get an error as shown below;

**** There is already another simpana running.

However, if you run a “simpana list” you’ll notice that no simpana services are running, despite the failure to start up and the error indicating simpana is already running.

During start up our software creates a lock file, which is checked for at each start; if it exists you will get the message above. During a clean shutdown it’s removed.

The best way to find out the name of the lock file is to take part of the error message and grep for it in the simpana script. Make sure you’re in the correct path when you grep for it, i.e. the default install location is /opt/simpana/Base

cat simpana | grep "There is already another simpana running"

From the output you will see the full path and name of the file whose existence is tested. Please watch the video clip below for a complete walk through.
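
As a rough sketch of the whole procedure; nothing here is hard-coded, since the lock file path is discovered from the simpana script itself:

cd /opt/simpana/Base   # default install location

# Locate the code that prints the error; the surrounding lines reveal
# which lock file is tested for existence
grep "There is already another simpana running" simpana

# Confirm no simpana services are actually running before touching anything
simpana list

# Only then remove the lock file identified above and start the services again
# rm <lock file path found in the grep output>
# simpana start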