MediaWiki site generating Warning: Invalid argument supplied for foreach() in LocalisationCache.php on line 390

Had to blog about this, as I had a problem with one of my MediaWiki sites that I moved onto a shared hosting provider.

Every page, when loaded, would display the error below somewhere on the page, i.e. in my case at the top, in my header region.

Warning: Invalid argument supplied for foreach() in /path/to/file/includes/cache/LocalisationCache.php on line 390

The fix for this is easy: just run the rebuildLocalisationCache.php script in the maintenance folder. If you don't have access to run it via a shell, you can add the following line to your LocalSettings.php file instead:

$wgLocalisationCacheConf['manualRecache'] = true;
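
If you do have shell access, running the maintenance script looks something like this (run it from the root of your wiki install; the php binary name/path may differ on your host):

php maintenance/rebuildLocalisationCache.php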

Check out the page here for more specific details on the script's usage etc.

Ubuntu 12.04.2 LTS and lxc continued

This is a continuation of my previous post here, but with more detail, in case you want to jump straight in and have a play with lxc (Linux Containers) on Ubuntu 12.04.2 LTS.

I moved to Ubuntu 12.04.2 LTS purely because lxc seemed to work out of the box, so I have left Debian 7 behind for now.

Getting Started

Installation of lxc on Ubuntu 12.04.2 LTS is as simple as running the command below:

apt-get install lxc

It will install what is needed and even configure the cgroup mount that is required (a manual step on Debian). This will install and configure a NATed 10.x.x.x network device called lxcbr0 on your host, which ALL templates use when you set up other Linux containers on your host.
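
You can confirm the NAT bridge came up with the command below; on a stock install the lxcbr0 subnet is typically 10.0.3.0/24, though that is worth verifying on your own host:

ifconfig lxcbr0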

If you want bridged networking for your Linux containers, i.e. to share the same network used by your host's ethernet device, you can do the following. This requires installing bridge-utils and a configuration change to your network file.

apt-get install bridge-utils

Next you need to configure /etc/network/interfaces to ensure that your network device is now configured for bridged networking. In my case, I wanted my eth0 to be the bridged device, so you comment out all eth0 networking references. Then create the additional lines below in the file:

auto br0
iface br0 inet static
 bridge_ports eth0
 bridge_fd 0
 bridge_maxwait 0
 address 192.168.4.10
 netmask 255.255.255.0
 network 192.168.4.0
 broadcast 192.168.4.255
 gateway 192.168.4.254
 dns-nameservers 192.168.4.254
 dns-search lan.heimic.net

As you can see above, I have configured a static IP assignment. If my eth0 was using 192.168.4.10, I've now taken that address to use on br0 and would have commented out all eth0-related configuration. Restart networking (and/or simply reboot), and be sure to have console access to the machine should you break it and need to fix it.
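
Once networking is back, you can sanity-check that the bridge exists and that eth0 is attached to it (a quick check using bridge-utils; interface names assume the config above):

brctl show
ifconfig br0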

Creating Container

The command to create a container is easy, and below is a sample.

lxc-create -n lxc1 -t ubuntu

This says to create a Linux container named (-n) lxc1 using the template (-t) ubuntu, which will end up being an Ubuntu 12.04.2 LTS container. The default location is /var/lib/lxc by the way; you could change this by creating a symlink to where you want them, or by changing the lxc configuration accordingly.
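
For example, the symlink approach might look something like the below (illustrative only; it assumes /data0 exists and that you do this before creating any containers):

mv /var/lib/lxc /data0/lxc
ln -s /data0/lxc /var/lib/lxc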

When creation completes, you will be told that the logon account is “ubuntu” and the password is “ubuntu”; be sure to change it.

If you want your container to make use of the bridged network rather than the NAT-based one the templates default to, do the following.

Find the config file associated with your new container; if you're using the default location, it will be:

/var/lib/lxc/lxc1/config

Edit the file, find the line that says "lxc.network.link=lxcbr0", change it to "lxc.network.link=br0" and save the change.
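
If you prefer a one-liner, something like this does the same job (it assumes the default container path from above):

sed -i 's/lxcbr0/br0/' /var/lib/lxc/lxc1/config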

Starting Container

To start the container you just created, issue the command below:

lxc-start -d -n lxc1

Once again the name (-n) is passed; the -d tells it to run in the background. If you don't pass it, the container will boot and show you the console output, which is good for troubleshooting, so drop the -d if you have problems. Note I haven't worked out a way to exit from the container when I don't pass -d, so you might have to kill your ssh session and/or halt the container to get your terminal session back.

Container Console

If you start the container using -d, you can access its console via the command below:

lxc-console -n lxc1

At which point you will get the logon banner for the console of the container. Log on now using the details you got during creation, and change the password.

At this point you can make changes to the linux install as needed, just like it was a normal physical install on its own dedicated hardware.

To exit the lxc-console, as it will have stated, press Ctrl+a then q.

Stopping Container

To shut down a container, issue the command below:

lxc-halt -n lxc1

Where -n is the name of the container, as always. See the trend with the commands?

Container Autostart

If you want the container to autostart when the host is rebooted, you should go into /etc/lxc/auto and create a symlink to your container's config file. By default on Ubuntu 12.04.2 LTS this directory is checked during system startup, and any container configs found there will be autostarted. Below is an example from my own environment:

root@alpha:~# cd /etc/lxc/auto
root@alpha:/etc/lxc/auto# ln -s /data0/lxc/bravo/config bravo
root@alpha:/etc/lxc/auto# ls -la
total 8
drwxr-xr-x 2 root root 4096 Aug 1 13:59 .
drwxr-xr-x 3 root root 4096 Jul 31 20:59 ..
lrwxrwxrwx 1 root root 23 Aug 1 13:59 bravo -> /data0/lxc/bravo/config

If this has worked, when you run the command below you will see the word (auto) next to the name of each container that will be started automatically when the host reboots.

root@alpha:~# lxc-list
RUNNING
 bravo (auto)

FROZEN

STOPPED
 vm0

Host/Container sharing mounts/file systems

If you'd like a filesystem from your host to be available in the container, you need to have the container use a bind mount, which comes up during container start and is removed during container shutdown. DO NOT MAKE THE BIND MOUNT STATIC ON THE HOST via /etc/fstab, as I found that when I lxc-destroy a container, it will remove data from any bind mounts.

The best way to describe the bind mount is to provide an example of what to populate in the container's config file. See below:

root@alpha:/var/lib/lxc/bravo# cat config | grep lxc.mount
lxc.mount.entry                         = /data1/cifs/backup data1/backup none defaults,bind 0 0

Which means /data1/cifs/backup from my host will be mounted at /data1/backup on the container.

root@alpha:/data0/lxc/bravo# df /data1/cifs/backup
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/alpha1-data1 3845456920 1166862816 2639526500 31% /data1

On the container it is shown as:

root@bravo:~# df /data1/backup
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/alpha1-data1 3845456920 1166862816 2639526500 31% /data1/backup

Hope this helps get someone started, as this information was found by research, reading and putting into practice.


SDR and paging messages

Following from my other post here, I’ve managed to finally secure a cable I needed to hook up my USB DVB-T tuner to an outside antenna. I’ve subsequently configured my Raspberry Pi as per the forum post here with the required software.

Basically I have enabled SDR (Software Defined Radio) and am using it to decode messages on the paging network.

I set up the tuner on Windows at first and used SDR# to find the frequency range. A friend told me roughly where to look and what to listen out for. He said you'd hear bursts every so often, and sure enough I did. Based on my location in South West Sydney, I found that 148.630 MHz was the frequency that got me what I needed.

If you get it working, the command below will produce output like the following:

Command:
rtl_fm -f 148.630M -s 22050 | multimon-ng -t raw -a POCSAG512 -a POCSAG1200 -a POCSAG2400 -f alpha /dev/stdin

Output:
POCSAG1200-: Alpha: 1957499:XMPROD PET_SYD_WATER_2 Regular heartbeat 1 to VHA

It appears that the paging network is still very much used by many. From looking at what comes over it, you can see that the medical industry uses it, as do other corporations for notifications etc.

I should add I found the following page helpful too. Click here.

Ubuntu 12.04.2 LTS and lxc

Have completed my Ubuntu 12.04.2 LTS install and configured lxc (Linux Containers). I am so far very impressed with just how easy it was to get this working out of the box. I think the Ubuntu team who produce Ubuntu 12.04.2 LTS have to be given a big clap. Very fine job.

I've installed some Debian squeeze containers and some Ubuntu ones. All seem to work great, and I will post more details soon on what I have done and how it was performed, as it might help anyone thinking of doing the same.

I wanted to do this so I could run some other software in containers and not clutter the host install. The host will see the processes for the containers etc., but that's fine and expected given how lxc works.

My aim was to install MythTV as a master backend into a container and have it use my HDHomeRun network-based tuner. This has actually worked, and I am running it now. However, I noticed during reboot/auto start of the container that the mythtv-backend wouldn't start; it turns out the upstart configuration is not going to work in a Linux container. The wiki page here links to the config that ships in Ubuntu 12.04.2, and below is the change I made so that it would start automatically. It's a hack and needs some further investigation, but I was in a rush to get it working in my environment.

root@delta:~# cd /etc/init
root@delta:/etc/init# cat mythtv-backend.conf | grep start
#start on (local-filesystems and net-device-up IFACE!=lo and started udev-finish)
start on net-device-up IFACE!=lo

As per above, I commented out the original "start on" line and created the amended one below it. This is performed in the file /etc/init/mythtv-backend.conf.

Now it will start correctly in my container at boot.

Asterisk and Voice Over IP

Purchased a humble Linksys/Cisco SPA922 IP phone and installed it earlier in the week. However, I wasn't able to use it until the power brick I ordered arrived, which came later in the week.

It's been a long time since I installed Asterisk, but I have fond memories that it was very awesome once you got it working. I struggled a little; some things were easy and others not so. Thanks to a friend who helped, I managed to configure everything I needed and even got it talking to my provider MyNetFone, from which I purchased a DID in the Sydney 02 region.

After tinkering Friday evening I got the inbound and outbound calls working. My next step was to get IVR working on the inbound calls, which I got done on Saturday morning and evening.

Very happy with the setup so far. I have subsequently copied my configuration and voice files onto a Raspberry Pi where I have set up Raspbian and installed Asterisk. I chose to run it like this as I found the PABX-based distro with the nice web interface confusing. What I have works, and works well so far for me. Will continue to work on/tweak it.

So far so good and very impressed.

Confluence 5.1.4 installation and configuration on Linux

Over the past 2 years or more I've become a bit of an expert and/or go-to person for Confluence. Below are my notes on how I would do a Confluence 5.1.4 deployment; please note it will not be a detailed process, and some knowledge is assumed.

Prerequisites:

  • Linux client/server installed
  • apache2 installed
  • mysql installed (I use mysql still)

Download the Confluence 5.1.4 bin file and copy it to your client/server. Since we will be using mysql, we should also download the mysql connector per the Confluence documentation. See step 6 on this page here.

1. Create the mysql database that you will use for your Confluence:

mysql> create database confluence_wiki character set utf8 collate utf8_bin;

2. Create a mysql user with a password and provide it the privileges required to access the database above:

mysql> create user 'conf'@'localhost' identified by 'password_string';
mysql> grant all privileges on confluence_wiki.* to 'conf'@'localhost';
mysql> flush privileges;

3. You can check the account privileges via the command below:

mysql> show grants for 'conf'@'localhost';

4. Now chmod 755 the Confluence 5.1.4 bin file you downloaded and execute it. Run through the installation and define your install path and Confluence data path.
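
For example (the exact bin file name will depend on the download you grabbed, so treat this as illustrative):

chmod 755 atlassian-confluence-5.1.4-x64.bin
./atlassian-confluence-5.1.4-x64.bin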

5. Extract the contents of the mysql connector archive and copy the jar file to the path below:

<Confluence installation>/confluence/WEB-INF/lib
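
The jar inside the connector archive is versioned, so the copy will look something like this (version number will differ):

cp mysql-connector-java-5.1.25-bin.jar <Confluence installation>/confluence/WEB-INF/lib/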

6. Before you point your browser at Confluence to complete the setup, I would recommend you stop and start it. Below are the scripts you can use to perform this.

<Confluence installation>/confluence/bin/stop-confluence.sh
<Confluence installation>/confluence/bin/start-confluence.sh

7. Now you can point your browser at the URL stated back at the end of step 4. Complete the installation as needed, following the Confluence installation documentation.

If you'd like your users to be able to access Confluence via an apache2 vhost, without the need to remember the server's name/IP and port of the Confluence installation, you can do the steps below.

Prerequisites

Ensure that you have DNS set up for the vhost, so clients get sent to this webserver.

1. Set up the vhost configuration file in the Linux-distribution-specific location, i.e. on Debian it's /etc/apache2/sites-available.

Sample vhost configuration file;

<VirtualHost *:80>
ServerName wiki.domain.com
ServerAlias wiki
ServerAdmin webmaster@domain.com

ProxyPreserveHost On

<Proxy *>
Order deny,allow
Allow from all
</Proxy>

ProxyPass / http://localhost:8090/
ProxyPassReverse / http://localhost:8090/

ErrorLog ${APACHE_LOG_DIR}/error.log

# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn

CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

With my vhost called wiki.domain.com, it sends all client connections for wiki.domain.com to localhost:8090, thus serving out the default Confluence installation completed just prior. If you have multiple Confluence installs on the client/server, you would just make as many vhosts as you need and modify the port on the end to point at each unique Confluence install.
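
On Debian, don't forget to enable the new vhost and reload apache2 (assuming you named the configuration file wiki.domain.com):

a2ensite wiki.domain.com
service apache2 reload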

2. Ensure you have installed the proxy* modules for apache2 and have them enabled, i.e. on Debian they are in the location below:

root@bravo:/etc/apache2/mods-available# ls -l proxy*
-rw-r--r-- 1 root root 87 Mar 3 23:07 proxy_ajp.load
-rw-r--r-- 1 root root 355 Mar 3 23:07 proxy_balancer.conf
-rw-r--r-- 1 root root 97 Mar 3 23:07 proxy_balancer.load
-rw-r--r-- 1 root root 803 Mar 3 23:07 proxy.conf
-rw-r--r-- 1 root root 95 Mar 3 23:07 proxy_connect.load
-rw-r--r-- 1 root root 141 Mar 3 23:07 proxy_ftp.conf
-rw-r--r-- 1 root root 87 Mar 3 23:07 proxy_ftp.load
-rw-r--r-- 1 root root 89 Mar 3 23:07 proxy_http.load
-rw-r--r-- 1 root root 62 Mar 3 23:07 proxy.load
-rw-r--r-- 1 root root 89 Mar 3 23:07 proxy_scgi.load
root@bravo:/etc/apache2/mods-available# pwd
/etc/apache2/mods-available

They can be enabled by running "a2enmod" with the module name; for the reverse-proxy vhost above, proxy and proxy_http are the essential ones.
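
For example:

a2enmod proxy
a2enmod proxy_http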

Be sure to restart apache2 when all done.

3. If you have everything configured correctly, you should be able to access http://wiki.domain.com and get sent to the Confluence install you completed previously. I would recommend updating the hostname in the Confluence console to use the vhost if it all works, so any links that get generated now use the vhost.

— That concludes things.


Converting Kogan PVR ts files into mpeg streams on Mac OSX

If you own a Kogan TV which has the PVR feature built in, did you know you can take those recordings and convert them into mpeg streams and/or other formats, after removing adverts, on Mac OS X?

Software Required:

  • ProjectX
  • MPEG Streamclip

You've already recorded the program on your Kogan TV using the PVR feature. Now take the external drive used to record on and connect it to the Macintosh. Copy the *.TS file from the media and place it somewhere; I like to create a folder. Once copied, you can eject the media from the Macintosh.

ProjectX is now used to demux the *.TS (transport stream) file, and this will produce some additional files. The two important ones are the demuxed video and audio files.

Using MPEG Streamclip, open the *.mpv file; now you can cut out the adverts and anything else, so you've only got the program you want minus all the extra stuff. Use Save As to save out the finished product. This will save out a *.mpeg file which can now be converted into other formats as required with other tools, e.g. ffmpegX etc.
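
If you want to go straight to H.264/MP4 from the command line instead, plain ffmpeg can do it. A sketch (file names made up, and a reasonably recent ffmpeg build with libx264 assumed):

ffmpeg -i program.mpeg -c:v libx264 -crf 20 -c:a aac output.mp4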

Simpana 9 – Check Readiness on Windows, Linux and Macintosh passes although backups fail

Came across an interesting thing today and thought I would publish it, as it could certainly catch people out.

You'll find the Client Readiness Check will pass on Windows, Linux and Macintosh clients (those I have checked so far), although your backups against those hosts fail at the 5% mark with an error similar to the below.

Error Code: [19:599] 
Description: Loss of control process ifind.exe. 
Possible causes: 
1. The control process has unexpectedly died. Check Dr Watson log or core file.
2. The communication to the control process machine wentx862k8-1 might have gone down due to network errors. 
3. If the machine wentx862k8-1 is a cluster, it may have failed over. 
4. The machine wentx862k8-1 may have rebooted.

And yet a readiness check against the client will still pass.

It appears the readiness check doesn't actually check all the services/processes on the client required to perform the backup. In this situation, I killed the EvMgrC process, which is certainly required; it's as important as the cvd process for the client to function and be backed up. Without it, the backup will fail, yet as you can see the Readiness Check will pass for the client, as it looks like EvMgrC is not actually checked.

Of course it's rare for EvMgrC to be unavailable like this, however it certainly could happen and cause confusion when a client readiness check is performed and clearly says everything is okay.

On Windows clients be sure to check that the essential services are running, and on Unix platforms run the following command to ensure the key items have an associated PID.

# cd /opt/simpana/Base
# ./simpana list
+---------------------------------+---------+----------------------------------+
| Service name                    |   PID   | Service command                  |
+---------------------------------+---------+----------------------------------+
| cvlaunchd                       | 4020    | /opt/simpana/Base/cvlaunchd      |
+---------------------------------+---------+----------------------------------+
| cvd                             | 4091    | /opt/simpana/Base/cvd            |
+---------------------------------+---------+----------------------------------+
| EvMgrC                          | 4085    | /opt/simpana/Base/EvMgrC         |
+---------------------------------+---------+----------------------------------+

Hope this helps anyone on the internet who might come across this. This type of failure is also accompanied by the following error message in the FileScan.log on the client, as outlined below, which is a good indication the issue is probably as described.

3044 be8 11/15 19:28:01 ### EvSocket::doConnect() - Could not connect to wentx862k8-1(wentx862k8-1):EvMgrC: Connect to 127.0.0.1:8402 failed: Connection refused

It appears that Windows clients recover the EvMgrC process after a period of time, so it should correct itself; however I haven't seen that same behaviour on Linux and/or Macintosh clients as yet. Will keep investigating whether they too recover it.

CommVault Simpana Linux client can backup but not restore

Had an interesting issue with a Linux client running CommVault Simpana where it could back up fine, but any attempt to restore would not work. The problem is that at first glance the errors indicate something network/comms related, although that wasn't the case.

On closer inspection the errors pointed at kernel parameters, and upon further investigation it was indeed kernel parameters. The errors in the logs are shown below for reference:

ClRestore.log
24346 407db90  11/09 12:20:28 105 [PIPELAYER  ]  Pipeline not Created Yet or is missing. retrying...
24346 407db90  11/09 12:20:46 105 [PIPELAYER  ] ERROR: Error: Received Message type=7 on sd=13
24346 407db90  11/09 12:20:46 105 CPipelayer::InitiatePipeline() - Error initiating pipeline!  plInitiatePipeline returned -1
24346 407db90  11/09 12:20:47 105 CCVAPipelayer::StartPipeline() - Failed to initiate pipeline
24346 407db90  11/09 12:20:47 105 CVArchive::StartPipeline() - Startup of DataPipe failed

And…

Cvd.log
12462 b7efd6d0 11/09 10:28:15 ### [CVD        ] IPCKEYS Path=//opt/hds/Base/Temp/1320798495_5482_55344016, curr=1, dest=2, topid=0
12462 b7efd6d0 11/09 10:28:15 ### [CVD        ] IPCKEYS key[0]=0x540e0055, key[1]=0x0a0e0055, key[2]=0x0b0e0055, key[3]=0x090e0055, ReaderKey=0x3b0e0055
12463 b7f4e6d0 11/09 10:28:15 ### [CVD        ] IPCKEYS Path=//opt/hds/Base/Temp/1320798495_5482_55344016, curr=2, dest=3, topid=0
12462 b7efd6d0 11/09 10:28:15 ### [CVD        ] ERROR: initIpc: shmget() err 22, flag 950, size 2007080
12463 b7f4e6d0 11/09 10:28:15 ### [CVD        ] IPCKEYS key[0]=0x540e0055, key[1]=0x0b0e0055, key[2]=0x0c0e0055, key[3]=0x090e0055, ReaderKey=0x3b0e0055
12462 b7efd6d0 11/09 10:28:15 ### [CVD        ] ERROR: plInitIpc: initIpc() fail err 22
12463 b7f4e6d0 11/09 10:28:15 ### [CVD        ] ERROR: initIpc: shmget() err 22, flag 950, size 2007080
12463 b7f4e6d0 11/09 10:28:15 ### [CVD        ] ERROR: plInitIpc: initIpc() fail err 22

I took another look at the kernel parameters set for some key kernel.* items:

kernel.sem = 500	64000	64	256
kernel.msgmnb = 65536
kernel.msgmni = 16
kernel.msgmax = 65536
kernel.shmmni = 8192
kernel.shmall = 0
kernel.shmmax = 0

Of course those last two caused me concern; once they were modified, the client could restore.

Turns out the host in question, although a 32bit Linux host with a PAE kernel, somehow had a sysctl.conf with values for these two kernel parameters taken from the 64bit initscripts package. The values were so large that, when applied by sysctl -p at boot, they wrapped around to 0 (both are exact multiples of 2^32). Hence the output as seen above.

From the 64bit sysctl.conf file:

..
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

From the 32bit sysctl.conf file:

..
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456
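
For reference, the corrected 32bit values can be applied to the running host with sysctl (and fix /etc/sysctl.conf as well so they persist across reboots):

sysctl -w kernel.shmmax=4294967295
sysctl -w kernel.shmall=268435456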

Anyways, problem solved. Talk about a bit of research and testing to find the culprit on that one. Don't ask me how the host got the 64bit sysctl.conf; I can only assume the values were pushed via a scripted change or introduced manually, as I can't see a bug filed against initscripts for RHEL 5.7 for this.

Firmware upgrade on motherboard renders Windows 7 boot with BSOD

I upgraded my motherboard firmware and thought nothing of it. Turns out I should have been more concerned: Windows 7 now won't boot and produces a blue screen of death (BSOD), followed by the automatic reboot.

Oh well, I've been wanting to reinstall the Intel i5 system for a few weeks now on account of some USB audio headset issues. If the reinstall doesn't resolve that fault, I will consign the USB audio headset to the bin and buy a new one.

Guess I've got a bit of work ahead of me this week each evening. Glad I've got a 2nd drive in the system; I'm currently reinstalling onto it and will pull the little data I had off the old system drive. Anything else needed is already on the 2TB data drive in the system, which is not affected by the reinstall anyway.

Move over screen, we have a new player in town

For as long as I have used Linux, I have been a huge fan of GNU Screen, but it appears we now have a new player in town.

Let me introduce tmux; if you haven't checked it out, I highly recommend you do so.

I just happened to stumble across a post by another blogger, at the link here, which introduced this as an alternative. Had I not seen the post, I wouldn't have known it even existed.

Password Safe

I've been using password safes for a while now, however I still hadn't found the right product that works across multiple platforms; i.e. I need something that works on a Mac, PC, iPad and Android-based phone.

I've always been a big fan of KeePass, however as of today I have gone back to using the simple Password Safe Windows application, as I found that a neat iPad product, pwSafe, exists to read its files. In addition, someone has made an Android application too. Now I can have the file available on all platforms. pwSafe can be upgraded for $1.99 to use Dropbox to grab the file and sync etc.

EDIT: The only compatible KeePass app I could find on iTunes was an old app that didn't look great on the iPad. This was a factor in moving away from the KeePass format too.