Transporter Sync – The hardware and the software

Since getting my Transporter Sync I'd seen reports that the drive, once formatted by the unit, may not be accessible by any other means, so I thought I would investigate this further myself.

I took the external drive I was using and attached it to another Linux machine. As I expected, I could see that 3 partitions had been created on my external drive: a 1GB primary partition (partition 1), followed by a ~1.8GB logical partition (partition 3), and the rest of the drive space (logical partition 4).
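For reference, a quick way to see that layout on the second machine (the device name /dev/sdb is just an example; check dmesg for the name your drive gets):

$ sudo fdisk -l /dev/sdb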

I’d heard that the drive may not be mountable outside of the unit. However, that’s not true: I could mount all the partitions and traverse the contents without an issue. That said, the layout of the disk is not what you’d expect. See below; your share names in the web portal are linked to what looks to be a UUID-type relationship.

# ls -la
total 48
drwxrwx---    7 embedded embedded      4096 Mar  9 21:36 .
drwxr-xr-x   18 embedded embedded      4096 Mar  9 20:20 ..
drwxrwx---    5 embedded embedded      4096 Mar  9 21:00 54f6e0afbe034963343d3082
drwxrwx---    2 embedded embedded      4096 Mar 12 21:53 54fd66fdbd1f46071996b985
drwxrwx---    6 embedded embedded      4096 Mar 12 12:07 54fd720c591f467c6196b991
drwxrwx---    8 embedded embedded      4096 Mar  9 21:27 54fd75569ac07c7e7896b97e
drwxrwx---    3 embedded embedded      4096 Mar  9 21:37 54fd77969ac07cf43196b97e
# pwd
/replicator/storagePools

Anyway, under these UUID-named directories is the data that you would find under each of your shares.

I also pulled down a copy of the diag logs and reviewed those, and noticed that the rootfs for the Transporter Sync is downloaded from Connected Data and used to build the OS onto the external drive (including a network boot before that happens). I also determined that a Transporter Sync in a labmode or beta state will have the ssh daemon enabled. I worked this out from the init.d script (nS50ssh) contained in the downloaded rootfs (rootfs-3.1.9.17914.tar).

Source: /etc/init.d/nS50ssh

start() {
    if /usr/bin/in-labmode || /usr/bin/is-beta || [ -e /replicator/configuration/ssh ] ; then
 	    echo -n "Starting sshd: "
	    /usr/sbin/sshd -f /etc/ssh/sshd_config -h /etc/ssh/ssh_host_rsa_key
	    touch /var/lock/sshd
	    echo "OK"
    fi
}
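Given that check, one way to enable sshd from another machine is to create the flag file the script tests for. A minimal sketch, assuming /dev/sdb4 is the relevant partition and that it is the one the unit mounts at /replicator (both assumptions on my part):

$ sudo mount /dev/sdb4 /mnt
$ sudo touch /mnt/configuration/ssh    # flag file tested by nS50ssh
$ sudo umount /mnt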

I made some tweaks along those lines so that when I unmounted the drive and attached it back onto my Transporter Sync, it would boot with ssh enabled. That let me dig around a little more; you can see the architecture information of the system below;

# cat /proc/cpuinfo
Processor       : ARMv6-compatible processor rev 4 (v6l)
BogoMIPS        : 239.20
Features        : swp half thumb fastmult vfp edsp java
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xb02
CPU revision    : 4

Hardware        : Connected Data CNS3411 Portal Board
Revision        : 0000
Serial          : 0000000000000000
# uname -a
Linux cd_haven 2.6.35.12-cavm1 #14 Tue Jun 24 14:51:41 PDT 2014 armv6l GNU/Linux

The program that seems to do all the heavy lifting is “replicator”, which Connected Data has developed and which, as far as I can tell, also has ties to Drobo.

I recently installed Windows 10 RC along with the Transporter Desktop client, and was surprised to see that the Transporter Library shows up in Windows 10 RC as a drive letter, which I thought was pretty cool. After installing the client, my D: drive is associated with the Transporter Library folder.

Simpana 10 – MySQL on Windows Server 2008 R2 – Video example

Just a quick demo of installing MySQL on Windows Server 2008 R2, followed by the installation of Simpana 10 MySQL iDA.

We also enable Binary Logging on MySQL so we can perform Log backups, configure the MySQL instance in Simpana and run some backup tests.
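For reference, enabling binary logging amounts to a couple of lines in the [mysqld] section of my.ini, followed by a restart of the MySQL service. A minimal sketch (the log base name and server-id below are example values):

[mysqld]
log-bin=mysql-bin
server-id=1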

You can watch the following video for the demo/example.

If you find this video of benefit, please leave feedback. Thanks.

Simpana 10 – PostgreSQL on Windows Server 2008 R2 – Video example

I know it’s been a little while since I did one of these, but I suddenly found myself in a position where I could do one so I quickly put this together.

Unfortunately it’s in 3 parts, so watch the 3 clips below end to end and it will make sense.

Part 1 of 3

Part 2 of 3

Part 3 of 3

You’ll notice I even made some mistakes in part 3, but I worked with it and used it to demonstrate how you can troubleshoot this type of issue.

If you find this valuable and/or have any questions, just drop me a comment. Always love to hear feedback.

HTS Tvheadend 3.9.2332 screen capture

I reinstalled Ubuntu 14.04 Server onto my Intel NUC and hooked up my Realtek USB tuner.

I was surprised to see that HTS Tvheadend has changed a bit since I last had it installed, and was happy to see some really nice GUI enhancements. Great work by the developers.

Screen capture below from my Finished Recordings tab.

[Screen capture: tvheadend-finished-recordings]

Google Drive Direct Download of large files

If you’re in a situation like the one described in the subject line, you will be thanking me and the person I am about to link to.

I was in a situation today where I needed to download some large files from Google Drive, but it had to be done from the Linux command line with wget. I could see roughly how it worked, but didn’t have the time to sit down and figure it out exactly. After a bit of searching I came across a blog post here by someone who had already done the hard yards and written a perl script.

All I can say is a big thank you to the author of that blog for the gdown.pl perl script. It worked great, and in fact saved me a few hours.
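For the curious, the general shape of what the script automates, as I understand it, is below. Google serves a confirmation page for large files, so you capture the cookies and confirm token from a first request and then download with them (FILEID is a placeholder, and the token format may change over time):

$ wget --save-cookies /tmp/gd-cookies.txt "https://docs.google.com/uc?export=download&id=FILEID" -O /tmp/gd-page
$ CONFIRM=$(grep -o 'confirm=[0-9A-Za-z_]*' /tmp/gd-page | head -1 | cut -d= -f2)
$ wget --load-cookies /tmp/gd-cookies.txt "https://docs.google.com/uc?export=download&confirm=${CONFIRM}&id=FILEID" -O bigfile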

In addition, I worked out that if you take the id= value from the Google Drive file share link, you can mine a bit of information associated with the file in question.

Check out the page here and use the Try it link. Insert your File ID and it will tell you, among other things, which account it’s being shared from.


Simpana 10 – Enabling Pre-Check Operations During Update Installation

I wasn’t aware of this until recently, so it was a good thing to find.

In Simpana 10, a number of Pre-Check Operations performed during the push of updates from the CommServe are disabled by default.

Those Pre-Check Operations are outlined below;

  • Client connectivity check – Verifies whether the client can communicate with the CommServe.
  • Disk space check – Verifies whether the client has sufficient disk space for the updates to be installed.
  • Package synchronization – Verifies whether the updates are already installed on the client.

Of course, since one of them is the disk space check, imagine what happens during an SP push to a Unix client that doesn’t have enough disk space for the update. If the update gets far enough through before failing for lack of space, it can corrupt the existing Simpana software on the client. I saw this first hand recently on a client.
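Until the pre-checks are enabled, a cheap manual guard is to eyeball free space on the client yourself before pushing; a sketch below (the install path is just an example, check where your Simpana instance actually lives):

# df -h /opt/simpana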

The regkey to set that will enable the Pre-Check Operations to be performed in the future is below;

nNoConnectionLimitsForPushUpdate

I confirmed that the Pre-Check Operations didn’t occur, as I saw the following log lines during my push of an SP to a client. The log named below is found on your CommServe.

DistributeSoftware.log

6196  3914  12/17 09:45:47 4825406 [UpdatePatches]() - Initiating update patches on client
6196  3914  12/17 09:45:47 4825406 [UpdatePatches]() - Skipping disk space check for client to reduce client connections

As always, if my post helps you in some way, please drop me some feedback by way of a comment. Love hearing from people.

Source: Simpana 10 – Books Online Documentation.

lxc on Ubuntu 14.04 LTS

I’ve done a quick install of lxc on Ubuntu 14.04, and the only difference I have found so far compared with doing this back on Ubuntu 12.04 (per the post here) is that lxc-list is no longer available and has been replaced by lxc-ls.

I find lxc-ls doesn’t give much information at all if run without any switches, as per the example below;

root@papa:/var/lib# lxc-ls
web1

However, if you run it with the -f switch it outputs more information, clearly showing the important details about the lxc containers.

root@papa:/var/lib# lxc-ls -f
NAME STATE IPV4 IPV6 AUTOSTART
------------------------------------
web1 STOPPED - - NO
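From here the usual container workflow applies; a quick sketch below (the container name web2 and the ubuntu template are just examples):

root@papa:/var/lib# lxc-create -t ubuntu -n web2
root@papa:/var/lib# lxc-start -n web2 -d
root@papa:/var/lib# lxc-ls -f

After the start, lxc-ls -f should show web2 as RUNNING along with its IPv4 address.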

I’ll continue to test this out further, but so far so good. Pretty impressed with Ubuntu 14.04 LTS.

Ubuntu 14.04 Server with TvHeadend and Realtek RTL2832U USB tuner

If you’ve seen my previous posts here and here, I can confirm that the instructions I provided in the post here are still applicable to the installation of TvHeadend on Ubuntu 14.04 Server.

I just installed Ubuntu 14.04 Server tonight and tested the installation process of TvHeadend per my other notes and it works fine.

Funnily enough, so far I really like Ubuntu 14.04 Server, so I will leave it running for a bit and see how much I like it after a few days or weeks.

rtorrent and rutorrent on Debian 7.5 Wheezy

I was hoping to install rtorrent and rutorrent on Debian 7.5 Wheezy; however, even though I was referencing the page here, it appears not to be that simple.

Specifically, the installation of the two packages below causes a messed-up packaging conflict that ends up in some sort of deadlock. I couldn’t get past it, so I had to give up for now.

The problem packages causing the conflict are these;

  • libcurl4-openssl-dev
  • libssl-dev
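For reference, the point where it falls over for me is simply trying to install the pair together;

$ sudo apt-get install libcurl4-openssl-dev libssl-dev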

I’ll give it another go in a few weeks’ time, and I’ll also continue to do a bit of research to see if anyone has had success with this and how they did it.

Ubuntu 12.04.4, TvHeadend and Realtek RTL2832U USB tuner

This week I set up an old Dell Optiplex 755 tower with Ubuntu 12.04.4, TvHeadend and a Realtek RTL2832U USB tuner to perform some DVB-T recordings. The TvHeadend installation is the exact same one I documented some months back when I used the same USB tuner on a Raspberry Pi. You can read about it here.

The installation was flawless and as simple as you’d expect. The system has been running for a few days now, capturing what I want. It also allows me to point the VLC client on other machines at the system to network-stream any of the DVB-T channels the tuner can tune (also shown in the previous post linked above).

To be honest, I’m thinking of buying another tuner so I can record from 2 different channels that don’t share the same stream/multiplex id.

Simpana 10 – advanced client properties – firewall – outgoing routes UI change

I noticed that the Simpana 10 Advanced client properties – firewall – outgoing routes user interface changed a little between SP5a and SP6. It may have happened in SP5b, which I couldn’t test at the time, but the difference is certainly there in our jump from SP5a to SP6.

Check out the pictures below to see what I mean. Looks like they relabelled some items.

Advanced Client Properties – Outgoing routes – SP5a
Advanced Client Properties – Outgoing routes – SP6

Simpana 10 – PostgreSQL 8.4 backup on CentOS Linux 5.10 x64 – example

I am going to assume this is a test deployment, and that you have installed your CentOS 5.10 x64 Linux the way you want it. I’ll pick up from that point and cover what I needed to do to get the distribution release of PostgreSQL working with the Simpana 10 PostgreSQL iDA to perform a backup. Of course, some knowledge is assumed.

  1. Install the postgresql packages onto your CentOS client.
    $ sudo yum install postgresql84 postgresql-server postgresql-devel
  2. Start the postgresql server for the first time. For the first run only, you need to use the initdb switch instead of start.
    $ sudo service postgresql initdb
  3. We should also enable the service to run at boot moving forward.
    $ sudo chkconfig postgresql on
  4. Before we change the authentication method below, we need to set a known password for the postgres user in the postgresql database. To do this, change to the postgres user and connect to the postgresql database.
    $ sudo su -
    # su - postgres
    $ psql
  5. Now, at the postgres prompt, update the password for the postgres user (you could create your own user instead, but I won’t cover that here). Be sure to remember what you set the password to; it will be required later on.
    postgres=# ALTER USER postgres WITH PASSWORD 'password';
    ALTER ROLE
    postgres=# \q
  6. The postgresql packages distributed with CentOS don’t use md5 password authentication; they default to peer/ident based authentication. In this example we will flip this to md5 based authentication, and we will touch on a peer/ident based authentication example in a later post. Perform the changes below to enable md5 authentication.
    $ cd /var/lib/pgsql/data
    $ sudo vi pg_hba.conf
    Find the line at the bottom of the file that looks like the one below;
    local     all     all                ident
    Change ident at the end of that line to md5, then save the changes.
  7. Now restart postgresql for the changes to take effect. (required)
    $ sudo service postgresql stop
    $ sudo service postgresql start
  8. Now you can test that this has worked by executing the command below as root; when prompted for the postgres user password, authenticate using the password set in step 5.
    # psql -U postgres
    If it worked, you will get the famous postgres=# prompt, from which you can enter \q [enter] to quit.
  9. Next up we need to enable archive logging. Edit the postgresql.conf file, which on a CentOS rpm based install lives in /var/lib/pgsql/data, and add the lines below in the Archiving section;
    archive_mode = on
    archive_command = 'cp %p /var/postgresql/archive/%f'
    Save those additions and move on below.
  10. Make sure to create the folder/destination used in the archive_command above and ensure the postgres user can write to it (a quick way to do this, and to verify archiving works, is shown after this list).
  11. Now restart postgresql for the changes to take effect. (required)
    $ sudo service postgresql stop
    $ sudo service postgresql start
  12. Install the Simpana PostgreSQL iDA.
  13. Once installed, refresh the Simpana Console and create your PostgreSQL instance. See the dialog below for the values I used in this configuration. The username and password are the postgres credentials we configured in step 5, and the archive log directory is the one used in the archive_command string at step 9.
    [Screenshot: simpana_10-centos-5.10_x64-postgresql_instance_creation]
  14. If everything goes to plan you should have your instance created, and you can now configure the DumpBaseBackupSet subclient and/or the FSBasedBackupSet subclient. For the difference between the two, I recommend you review the documentation, as each backupset has its own unique capabilities. See the bottom of the Backup documentation page for explanations.
  15. Assign a Storage Policy to each subclient and run a backup of each to confirm it works.
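As flagged at step 10, here’s a quick way to create the archive destination and prove archiving works before running the Simpana backups. This is my own sanity check, not from the CommVault docs; pg_switch_xlog() is the PostgreSQL 8.4-era function that forces a WAL segment switch so archive_command fires.

$ sudo mkdir -p /var/postgresql/archive
$ sudo chown postgres:postgres /var/postgresql/archive
$ psql -U postgres -c "SELECT pg_switch_xlog();"
$ ls -l /var/postgresql/archive

If archiving is healthy, a 16MB WAL segment file should appear in the directory shortly afterwards.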

CommVault Documentation references:

Simpana 10 – Linux client prepost command execution failure

I came across an interesting condition today, and it took a bit of testing to work out why the job would go into a pending state. This one relates to Simpana 10 on a Linux client where you have a File System iDA with a PrePost command being executed. In my test below the script does nothing special; it’s merely something to execute to show the behavior. I’ve provided it below purely for reference.

[root@jldb1 bin]# cat pre-scan.sh
#!/bin/sh
# test
#

echo $1 $2 $3 $4 $5 $6 $7 $8 $9 >> /root/pre-scan.log
exit 0

The job goes pending and produces the errors and output below;

JPR (Job Pending Record)
Error Code: [7:75]
Description: Unable to run [/usr/local/bin/pre-scan.sh] on client.
Source: jwcs, Process: startPrePostCmd

[Screenshot: simpana_10-linux-prepost-command-execution-failure]

[JobManager.log – commserve]

3024  d88   03/27 18:16:26 21  Scheduler  Set pending cause [Unable to run [/usr/local/bin/pre-scan.sh] on the client.                 ]::Client [jwcs] Application [startPrePostCmd] Message Id [117440587] RCID [0] ReservationId [0].  Level [0] flags [0] id [0] overwrite [0] append [0] CustId[0].
3024  118c  03/27 18:16:26 21  Scheduler  Phase [Failed] message received from jwcs.lab.heimic.net] Module [startPrePostCmd] Token [21:3:1] restartPhase [0]
3024  118c  03/27 18:16:26 21  JobSvr Obj Phase [3-Pre Scan] for Backup Job Failed. Backup will continue with phase [Pre Scan].

[startPrePostCmd.log – commserve]

4940  e4c   03/27 20:21:46 ### Init() - Initializing job control [token=21:3:7,cn=jwcs], serverName [jwcs.lab.heimic.net], ControlFlag [1], Job Id [21]
4940  e4c   03/27 20:21:47 ### Cvcl::init() - CVCL: Running in FIPS Mode
4940  e4c   03/27 20:21:48 ### CVJobCtrlLog::registerProcess(): successfully created file [C:\Program Files\CommVault\Simpana\Base\JobControl\4.940]
4940  e4c   03/27 20:21:48 ### ::main() - jobId 21 - restoreTaskId = 0
4940  e4c   03/27 20:21:48 ### ::main() - jobId 21 - adminTaskId = 0
4940  e4c   03/27 20:21:48 ### ::getBackupCmdAndMachine() - jobId 21 - before construct application id
4940  e4c   03/27 20:21:49 ### ::getBackupCmdAndMachine() - appTypeId = 29
4940  e4c   03/27 20:21:49 ### ::getBackupCmdAndMachine() - jobId 21 - symbolic AppId = 2:20
4940  e4c   03/27 20:21:49 ### ::getBackupCmdAndMachine() - jobId 21 - prePostId = 1
4940  e4c   03/27 20:21:49 ### ::getBackupCmdAndMachine() - jobId 21 - preifind cmd = /usr/local/bin/pre-scan.sh
4940  e4c   03/27 20:21:49 ### ::main() - jobId 21 - commandPath = /usr/local/bin/pre-scan.sh
4940  e4c   03/27 20:21:49 21  ::main() - jobId 21 - before execute cmd
4940  e4c   03/27 20:21:49 21  ::main() - jobId 21 - Use Local System Acct.
4940  e4c   03/27 20:21:49 21  ::main() - jobId 21 - remoteexename = [/usr/local/bin/pre-scan.sh]
4940  e4c   03/27 20:21:49 21  ::main() - jobId 21 - args = [ -bkplevel 1 -attempt 7 -job 21]
4940  e4c   03/27 20:21:49 21  executePrePostCmd() -  Attempting to execute remote command on client [jldb1]..
4940  e4c   03/27 20:21:49 21  executePrePostCmd() - jobId 21 - Received error text from server cvsession [Unknown Error]
4940  e4c   03/27 20:21:49 21  executePrePostCmd() - jobId 21 - Error [0] returned from executeRemoteCommand /usr/local/bin/pre-scan.sh
4940  e4c   03/27 20:21:49 21  EvEvent::setMsgEventArguments() - MsgId[0x0700004b], Arg[1] = [117440623]
4940  e4c   03/27 20:21:49 21  EvEvent::setMsgEventArguments() - MsgId[0x0700004b], Arg[2] = [/usr/local/bin/pre-scan.sh]
4940  e4c   03/27 20:21:49 21  EvEvent::setMsgEventArguments() - MsgId[0x0700004b], Arg[3] = []
4940  e4c   03/27 20:21:49 21  EvEvent::setMsgEventArguments() - [MsgId[0x0700004b][]: [3] Args Pushed, [1] Args expected.
4940  e4c   03/27 20:21:49 21  ::exitHere() - jobId 21 - Exiting due to failure.
4940  e4c   03/27 20:21:49 21  BKP CALLED COMPLETE (PHASE Status::FAIL), 21. Token [21:3:7]
4940  e4c   03/27 20:21:53 21  ::exitHere() - jobId 21 - startPrePostCmd Terminating Event.
4940  238c  03/27 20:21:53 21  CVJobCtrlLog::unregisterProcess(): successfully removed file [C:\Program Files\CommVault\Simpana\Base\JobControl\4.940]

[cvd.log – client]

30846 427e0940 03/27 20:21:50 ### [CVipcD] Requests from non-CS with hostname [jwcs.lab.heimic.net] and clientname [jwcs] to execute in user entered path are not allowed

I worked out that this problem is caused by a lack of value in the regkey sCSGUID, found in the location below;

/etc/CommVaultRegistry/Galaxy/Instance001/CommServe/.properties

Sample below;

[root@jldb1 ]# cat /etc/CommVaultRegistry/Galaxy/Instance001/CommServe/.properties | more
bCSConnectivityAvailable 1
sCSCLIENTNAME jwcs
sCSGUID
sCSHOSTNAME jwcs.lab.heimic.net
sCSHOSTNAMEinCSDB jwcs.lab.heimic.net

sCSGUID should be populated; its lack of a value is what causes this condition with pre-scan script execution.
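A quick way to check any client for this condition is to print the key’s line and see whether a GUID follows the name (just my own one-liner, nothing official);

[root@jldb1 ]# grep '^sCSGUID' /etc/CommVaultRegistry/Galaxy/Instance001/CommServe/.properties
sCSGUID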

Fix:

The easiest method to recreate this regkey value is to do a local uninstall of the Simpana services on the client, revoke the client certificate for the client in question in the Simpana Console (via Control Panel – Certificate Administration), and then reinstall.

Observation:

Subclients that have no scripts executed as part of the backup will run fine even if this regkey value is missing; you will never see a problem until you add a script. In addition, clients that have a Simpana firewall configuration will be broken, and on those clients even subclients without scripts will break too, as I believe (based on my testing) the regkey value is used for the Simpana firewall configuration exchange.

Hope you enjoy my post… drop me a comment if you like the content and/or it helps you.