Wednesday 31 August 2016

Linux RAID-1 [8]: Setting up the replacement array

As I mentioned last time, I have bought the 2 TB disks so that I can have a RAID-1 array in the main computer big enough to hold all of my stuff, meaning the second computer (and by extension its screens and desk) will become effectively redundant. So here I am essentially repeating the steps from the earlier articles in this series.

After backing up my stuff, I shut the computer down, found the faulty disk from the 1 TB array and removed it from the PC. The other half of the array was left in the computer and reconnected, while the two new disks went into the other bays. With everything plugged in, the computer was turned back on again. It should have come straight back up, but it was not so easy: the 1 TB disk was now plugged into a different port, so an mdadm --assemble --scan command was needed to get the 1 TB array back online. After another reboot everything is now where it is supposed to be, so /home is going again and the two new disks are ready to be made into a new array.

The new disks are /dev/sdb and /dev/sde this time, so the first thing to do is to partition them, using gnome-disk-utility, with one partition on each disk. Then we issue this command to create the array from those partitions:
sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sde1

After that we can see that /dev/md1 is online in GDU. A look at /proc/mdstat tells us that /dev/md1 is resyncing and will be finished in about three hours. This does not prevent it from being used, however. So our next step is to format the 2 TB partition, mount it, and then rsync the existing home drive to it. At the same time we can start to move data across from the 2nd computer; but before we do that, the 2nd computer needs its own local backup, so that is also started with another rsync.
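
The format, mount and copy step can be sketched as shell commands; /dev/md1 is from above, while the mount point and rsync flags are my assumptions rather than a terminal transcript:

```shell
# Sketch only: format the new array, mount it, and copy home across.
sudo mkfs.ext4 -L newhome /dev/md1
sudo mkdir -p /mnt/newhome
sudo mount /dev/md1 /mnt/newhome

# -a preserves permissions, ownership and timestamps; -H keeps hard links.
# The trailing slashes copy the contents of /home, not the directory itself.
sudo rsync -aH --info=progress2 /home/ /mnt/newhome/
```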

Next is to save the details of the array for mdadm (the redirection needs root, hence the sh -c wrapper):
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'

I then rebooted, and on running blkid saw the new array renamed to md127. Then we put it into /etc/fstab as the mount point for /home, and change the old disk to /oldhome. So the fstab first looks like
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
UUID=f49e5734-6c56-4f17-81b1-6423e4045e75 /               ext4    errors=remount-ro 0       1
UUID=cd9d465e-5574-4afd-85a6-851ace3959b7 none            swap    sw              0       0

UUID=6f1ea562-fb78-44ea-9262-d4234886064d /home           ext4    defaults        0       2

and now it will look like

UUID=f49e5734-6c56-4f17-81b1-6423e4045e75 /               ext4    errors=remount-ro 0       1
UUID=cd9d465e-5574-4afd-85a6-851ace3959b7 none            swap    sw              0       0
UUID=6f1ea562-fb78-44ea-9262-d4234886064d /oldhome        ext4    defaults        0       2

UUID=dfc4fa75-63e1-49ad-9a82-d9d0b826db3f /home           ext4    defaults        0       2

A reboot is the simplest next step. On coming back up, everything is mounted correctly and some simple checks show that stuff is where it's meant to be.
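
The md127 rename happens, as far as I can tell, because the copy of mdadm.conf baked into the initramfs predates the new array; a hedged fix, rather than living with the md127 name, is to rebuild the initramfs after updating the config:

```shell
# Sketch: after the new array is recorded in /etc/mdadm/mdadm.conf,
# rebuild the initramfs so the array is assembled at boot under its
# configured name (md1) instead of the fallback md127.
sudo update-initramfs -u
```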

In this case the final step is to move, over the network, the files from the 2nd computer, leaving it only with the VirtualBox and Google Drive folders in its home drive for when I do my study in there. At some point in the future I will remove the md0 RAID array and its disk from the system.

The relocation of data from both sources was completed during the day. During the next night, all of the data from the old 1 TB array (oldhome) was copied again to the 2 TB disks, this time as a move rather than a copy, in order to clean up the 1 TB array now that the new array had proved its operational capability. This is actually what I should have done the first time instead of rsyncing from oldhome to home. The hidden folders (dot folders) on oldhome will be moved into a backup folder rather than copied over into home, as a lot of their contents will have changed, unlike the visible folders.

Saturday 27 August 2016

Linux RAID-1 [7]: Recovery and fault-finding

/dev/md0:
        Version : 1.2
  Creation Time : Sat Jul  2 04:03:31 2016
     Raid Level : raid1
     Array Size : 976631360 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976631360 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Aug 27 15:49:27 2016
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : patrick-H97-D3H:0
           UUID : fb59dadf:72ab4a7e:f821e41a:daa5988b
         Events : 524054

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       48        1      active sync   /dev/sdd

As you can see from the above, the RAID-1 array on my main computer has problems, with one of the disks falling out of the array. Hence in the table at the bottom, where the two devices in the array are listed, one is shown as removed, and the overall state of the array shows as "degraded".

The cause is shown in this extract from the kernel log:

Aug 27 11:05:09 MainPC kernel: [25592.706805] ata3: link is slow to respond, please be patient (ready=0)
Aug 27 11:05:12 MainPC kernel: [25595.794847] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Aug 27 11:05:12 MainPC kernel: [25595.801763] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x100)
Aug 27 11:05:12 MainPC kernel: [25595.801765] ata3.00: revalidation failed (errno=-5)
Aug 27 11:05:21 MainPC kernel: [25604.206942] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Aug 27 11:05:21 MainPC kernel: [25604.213541] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x100)
Aug 27 11:05:21 MainPC kernel: [25604.213543] ata3.00: revalidation failed (errno=-5)
Aug 27 11:05:21 MainPC kernel: [25604.213556] ata3: limiting SATA link speed to 3.0 Gbps
Aug 27 11:05:29 MainPC kernel: [25612.239030] ata3: SATA link down (SStatus 0 SControl 320)
Aug 27 11:05:29 MainPC kernel: [25612.239034] ata3.00: link offline, clearing class 1 to NONE
Aug 27 11:05:29 MainPC kernel: [25612.239037] ata3.00: disabled
Aug 27 11:05:29 MainPC kernel: [25612.239048] sd 2:0:0:0: rejecting I/O to offline device
Aug 27 11:05:29 MainPC kernel: [25612.239051] sd 2:0:0:0: killing request
Aug 27 11:05:29 MainPC kernel: [25612.239056] ata3.00: detaching (SCSI 2:0:0:0)
Aug 27 11:05:29 MainPC kernel: [25612.239183] sd 2:0:0:0: [sdb] Start/Stop Unit failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Aug 27 11:05:29 MainPC kernel: [25612.242814] sd 2:0:0:0: [sdb] Synchronizing SCSI cache
Aug 27 11:05:29 MainPC kernel: [25612.242840] sd 2:0:0:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Aug 27 11:05:29 MainPC kernel: [25612.242841] sd 2:0:0:0: [sdb] Stopping disk
Aug 27 11:05:29 MainPC kernel: [25612.242847] sd 2:0:0:0: [sdb] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Aug 27 11:05:29 MainPC kernel: [25612.251453] ata3: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen

Clearly the disk is identified as /dev/sdb, which was the disk missing from the mdadm output. There were plenty of other similar messages in the kernel log from later in the boot cycle. The physical symptom as the computer was coming on, while these messages scrolled up the screen, was that the disk would keep spinning up, then stopping and resetting. The messages appear on screen because I have text mode set in grub; this computer has needed text-mode startups ever since it was first installed, to facilitate swapping from the broken Nouveau drivers to NVidia for the graphics card.

The sequence of kernel messages finally ends at 11:06:53, when the disk apparently came back online, so it took nearly 2 minutes to get the disk working; meanwhile, even though the SMART tests look OK, mdadm decided to drop it out of the array. These disks are a pair of WD Caviar Black 1 TBs that I purchased four years and five days ago. Obviously a consideration is whether to replace them with 2 TB units. I am currently using two computers to store data, each with a 1 TB array, only because I ran out of space on the 1 TB array some time ago and decided splitting the data across two computers was a cheaper option, since some old spare 1 TB disks were available for the second computer.

I have decided to replace both disks in the array with 2 TB disks, a long-deferred decision as mentioned above. This will bring the array up to 2 TB and therefore allow the other computer to be eliminated as an extra storage location. I am looking ahead to a day when having two separate computer desks in two rooms of the house takes up too much space; in fact, having four computers in two rooms, as is actually the case, is extravagant. So the idea then would be to have just one computer desk with two computers attached to it.

Obviously there are the usual lessons about backups to be learned. In this case I had a recent backup, done when Xubuntu was put onto the computer. I also have a regular series of backups on some removable disks, and the RAID array is in itself a backup technique. But if there was a fire here I could still lose a month's worth of data: currently none of those backup disks are kept out of the house, so I have to get back to taking at least one of them offsite regularly. However, at the moment not many photos are being taken, and most other stuff, like maps and study, is backed up on Google Drive, so I could probably manage. Full rsync backups are what I mainly do at the moment because I haven't worked out anything else.

mdadm and some other tools exist for monitoring RAID arrays, but I haven't used any monitoring tool to date.
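
mdadm's own monitor mode is the obvious candidate; a minimal sketch (the mail address is a placeholder, and a working local mail setup is assumed):

```shell
# Tell the mdadm monitor daemon where to send DegradedArray/Fail events.
echo 'MAILADDR admin@example.com' | sudo tee -a /etc/mdadm/mdadm.conf

# One-shot check from a terminal; --test generates a test alert per array
# so you can confirm the mail path works.
sudo mdadm --monitor --scan --oneshot --test
```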

Well, I ran two SMART tests on the disk using smartctl: the short one was fine, and the long one came back with a read failure. In fact gnome-disk-utility, the GUI used to look at disks in distros like Ubuntu, has now updated itself to show the disk as failed. So I am working for sure on the basis that as of now I need to make some backups and replace this disk, because it really is stuffed. New disks will arrive next week and the new array will be put in soon after that. So I make a backup of /home to a removable drive in the caddy and redirect to it, then take out both the old disks, put the new ones in, copy /home back to them and then redirect back to them.
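
For reference, the smartctl invocations involved look like this (device name as in this article; the smartmontools package is assumed to be installed):

```shell
sudo smartctl -t short /dev/sdb      # quick self-test, a few minutes
sudo smartctl -t long /dev/sdb       # full surface read, can take hours
sudo smartctl -l selftest /dev/sdb   # show the self-test results
sudo smartctl -H /dev/sdb            # overall SMART health verdict
```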

Sunday 21 August 2016

Virtualisation and Qgis

UPDATE: The below comment does not make a lot of sense now, as I have since used VirtualBox VMs very extensively to run Qgis and edit maps. Right now I don't need to do this, but in the past it has certainly been a major part of the Qgis work that I have done.

(Unfortunately, after further testing, the use of VMs, at least under VirtualBox, to run different Qgis versions has been found to be unsatisfactory. From a technical viewpoint it is easy to see how problems would arise in an environment where the video, keyboard and mouse are emulated rather than native.

The solution that has been arrived at for now is to use the VM to edit project properties like the rule based styles in an older version of Qgis, while continuing to use the development version for everything else. However should the development version have too many issues to make this work I would be forced to revert to the current release version, since in Linux it appears to be much more difficult to have multiple versions on the same computer simultaneously)

One of the perils of running a beta version of software is that the bugs may make it unusable. This has happened from time to time with Qgis but it has never been as much of a problem as it is with the current version 2.17-master. A bug has been found that freezes Qgis when attempting to edit rule-based styles in data layers. Since all of my layers use rule-based styles (there is a "type" field in each record which is used in all the rules) obviously I have a problem with this version.

For now I have set up two VMs to use the current release version of Qgis. One of these is running the same version of Xubuntu as the host and the other is running Windows 10. Since the host now has 24 GB of RAM (3/4 of the maximum of 32 GB the board supports) it is easily possible to run one or even both of these VMs at the same time. I won't actually need to do this but I did run the Xubuntu one and set up the Win10 one at the same time.

Here you can see the Xubuntu VM running Qgis 2.16. The Maps folder on the host has been shared and after a fashion I was able to work out how to permanently mount this to a Maps folder in the home folder of the user on the VM. So when I save stuff on the VM, it goes back into the right folder on the host (using network shares as the shared folder capability in VirtualBox ran into permissions problems).

And here we have Qgis running on the Windows 10 VM. Again using a mapped network drive, this time drive M mapped to the share. As you can see a key difference is that the Windows 10 guest additions don't support full automatic resizing of the VM's display, which becomes a window centred inside the VirtualBox window. 

So at least I can continue working with Qgis (probably preferring the Xubuntu VM, but may need the Win10 one to run IrfanView when needed) until the bug in 2.17 is addressed.

Both these VMs are using the host's folders to access the map content. This also means the overGrive app running on the host will synchronise changes to the NZ Rail Maps Google Drive.

The third VirtualBox VM I have running on this machine is a Windows 7 one. One of the reasons for having it is to give me access to a different Google Drive - this being the one linked to my main email account - which in this case is used for a whole lot of personal stuff. So for example my budget spreadsheet and all of my study materials are on that one. There is also software I need for those purposes which is not available on Linux.

I also had a play with VirtualBox on one of my work computers, which has a Wolfdale Celeron E3300 dual core and only 4 GB of RAM. This particular CPU was specially selected at the time of purchase because it supports VT-x (and 64-bit), so it just manages to meet the requirement for hardware-assisted virtualisation. The computer is 6 years old and long since ceased to be of any use at home: the 4 GB memory limitation, driven by having only 2 memory slots and the unavailability of DIMMs larger than 2 GB, means it runs out of steam pretty quickly. Still, with Xubuntu on it, which has a lower memory footprint than a lot of other Ubuntu-based distros, it has managed to set up and run a Windows 10 VM in 1 GB of memory. This has been pretty slow work, but it looks like it should be able to achieve what is needed, and I might be able to increase the memory allocation if it can take over some of the tasks I currently put on my Xubuntu desktop. It worked surprisingly well with just 1 GB allocated, and the host also coped well with a number of other resource-intensive applications, such as Google Earth and Thunderbird, running at the same time. Oddly, if I put the VM up to full screen on this machine it can adjust the resolution to 1920x1080, but on my home computer, where 1680x1050 is the screen resolution, the VM can't be made to go higher than 1152x864; it would seem the VBox Guest Additions have limited adaptability to different host screen sizes.

Subsequent to this I discovered issues in the Xubuntu VM that were peculiar to it and not seen in the Win10 VM, so I switched to the Win10 VM and investigated whether the screen resolution could be bumped up. It turns out this is possible with the VBoxManage command on the host. Specifically, the command VBoxManage setextradata "VM Name" CustomVideoMode1 1680x1050x32 did the trick; after restarting the VM, the new fullscreen resolution appeared in the display settings. VBoxManage is basically the command-line tool for changing all of a VM's settings, and appears to be a superset of what is available in the GUI.

Thursday 18 August 2016

The Linux systemd debacle

If you are reasonably familiar with the Linux ecosystem you will be aware that there has been a big rift over a process called "systemd", which is concerned with Linux system and session management. The issue for many has been that systemd steps beyond its role as the "init" process, with its creators wishing to extend it into an increasingly invasive takeover of more and more of the system management processes.

My conviction is that the developers of "systemd" are trying to make their project bigger than anyone else's, including the kernel project, which they don't lead or control. Rightly or wrongly, it is seen in terms of their own egos, and also the implied desire of their employer, Red Hat, to have a strong influence on the direction Linux takes in the future, presumably to the corporate and financial benefit of its business interests.

The Linux community has as such become strongly polarised between those who see the implied dominance of Red Hat as the issue it undoubtedly is, and those who are less concerned with the overall implications. There is now a specific fork of Debian, Devuan, dedicated to ensuring systemd remains completely optional, following a bitter debate and split over the issue in that community.

As an end user I am somewhat concerned that we will have limited choice: the distros I install have transitioned to systemd, and I am not willing to switch to a distro that specifically excludes it just for the sake of doing so. Unfortunately, due to software compatibility issues and my own needs and expectations of a distro, I have very little choice in the matter.

It is quite right that the Devuan admins and others have highlighted the risks of allowing a for-profit corporation to dominate the Linux platform, given where Linux has come from and how much future difficulty could be caused of exactly the kind everyone came to Linux to escape. That makes it easy to understand the heated debate and divide, particularly in the Debian community.


Monday 15 August 2016

Enable hibernation in Xubuntu

Hibernation is a somewhat controversial subject in the Linux desktop community. It is considered difficult to implement and to have the potential to cause a lot of problems. I have used hibernation wherever possible on all my Linux desktops, but it is not always straightforward. Older hardware may not support it, and even where the hardware is modern, changing a card can be enough to break hibernation and require a reinstallation to fix.

Here are the instructions for changing the settings to enable hibernation on Xubuntu. These have been verified as applicable to 16.04 LTS.

Hibernation uses the swap partition to store the memory contents of the computer. You should ensure the swap partition is at least twice the size of the computer's RAM.
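
The core change, as I understand it for Xubuntu 14.04/16.04, is a polkit override that allows ordinary users to trigger hibernation; a sketch of the usual file, with path and contents from Ubuntu's commonly documented method, so verify against your release:

```ini
; /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla
[Re-enable hibernate by default in upower]
Identity=unix-user:*
Action=org.freedesktop.upower.hibernate
ResultActive=yes

[Re-enable hibernate by default in logind]
Identity=unix-user:*
Action=org.freedesktop.login1.hibernate;org.freedesktop.login1.handle-hibernate-key;org.freedesktop.login1.hibernate-multiple-sessions;org.freedesktop.login1.hibernate-ignore-inhibit
ResultActive=yes
```

After creating the file and rebooting, the Hibernate option should appear in the session menus.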

My experience has been that older computers such as my dedicated work computers, which are both of advanced age, will not hibernate reliably in any OS. However, Xubuntu hibernates well on the four year old Ivy Bridge and two year old Haswell Refresh computers.

Saturday 13 August 2016

Installing Chrome on Ubuntu

What's interesting about Chrome is the different icons. On Mint computers and Lubuntus the icon has been square, while on Xubuntu we are back to the old round icon. This may be because of the different installation method I have used.

In general it is better to use a package supplied with your distro rather than one you download directly from a website. There are various reasons for this, but key among them is that it can include an update system that will cause it to be updated automatically. Usually this is done by having a package repository added to apt's sources.list file, so that new package versions are detected with an apt update command.
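
As an example of the mechanism, installing the Chrome package leaves a repository file behind for apt; the contents shown here are indicative rather than copied from a machine:

```shell
# Chrome's installer drops a repo file so apt can keep it updated:
cat /etc/apt/sources.list.d/google-chrome.list
# deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
```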

Opera's downloadable browser package is an example of incorporating this capability within the install, and the Chrome and Google Earth packages also have it. On the other hand, Firefox Developer Edition from Mozilla's website does not, as no install script is provided, which means Aurora cannot self-update. Use ubuntu-make instead to install Aurora.

The method for installing Chrome as found on the Internet is as follows:

Install the signing key for the Chrome archive using this command

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

Download the Chrome package (google-chrome-beta and google-chrome-unstable are also options)

wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
Install this package using dpkg:

sudo dpkg -i google-chrome-stable_current_amd64.deb

If you get errors about missing dependencies, run sudo apt -f install, which fixes them and automatically completes the install of Chrome.


Friday 12 August 2016

Xubuntu vs Lubuntu

Well, of course, after playing with Xubuntu at home it didn't take much to have it installed on my work computer, a very old Wolfdale with a Celeron CPU and 4 GB of RAM; a slow computer even for its age, which I would guess at around 6 years. This has been running Lubuntu up to now. The result of this test showed there is not really a great deal of difference performance-wise, at least for the apps I use, between these two variants of Ubuntu, though I understand the memory footprint of Lubuntu itself may be slightly less.

Nevertheless, the window manager in Xubuntu (Xfwm4, rather than Openbox) gives a much cleaner appearance to the operating system, a highly desirable characteristic for someone like me who values a polished interface. The further advantage is that it is not difficult to install sophisticated apps like Google Earth. Therefore it is more of an all-round system than either Mint, which is too resource-hungry for old computers, or Lubuntu, which has too many components missing to make it easy to install power applications.


Thursday 11 August 2016

Xubuntu vs Mint

Today I decided I needed something other than Mint on one of my systems. I chose to do this because Mint has had security issues that led to their website being hacked to distribute malware, and their forum database passwords being stolen, which unfortunately brings down the reputation of the whole project.

There were only ever going to be two possible alternatives to Mint: Debian and Ubuntu. I had a play with Debian, which provides Cinnamon within the distro as a supported desktop environment. Although the current Debian release has quite an old version, 2.2, you can switch to the unstable repository to update it to 3.0. The main issue with Debian is under the hood: software packages I use all the time are unavailable from the official repositories because of Debian's dislike of the licensing. They even went so far as to rebrand both Firefox and Thunderbird on the grounds that the logos are trademarked by the Mozilla Foundation, and the resulting Icedove mail application wouldn't recognise my Thunderbird profile. Is that a big deal? Well, yes, if recognising the profile saves you having to reinstall all your email accounts and download all the mail over again.

So what looked quite promising ended up being a waste of time: while I can hack installs as well as a lot of expert users, the lack of support for many things that Ubuntu/Mint cater for was going to be a really big issue, making it nearly impossible to have a smooth-running system and a painless transition from Mint.

So then I had a live preview of Ubuntu followed by Xubuntu, and quickly chose the latter because its desktop environment Xfce is as good as if not better than Cinnamon. And the biggie of course is it can install Ubuntu packages, which comes down to much wider support in some areas than Debian. Ubuntu itself is limited in that Cinnamon is not officially available on it, so the way seemed clear to break with Cinnamon completely. As it happens, Xfce bears more than a passing resemblance to Cinnamon.

We shouldn't knock Debian, of course, as it is running underneath Ubuntu and all its flavours; it is just more of a system that favours expert users, and has some limitations. Which is why building Ubuntu on top of it gives the best all-round outcome.

So LoungePC is getting another makeover, the third in the same week. The installer is almost identical to Mint's, both of which are somewhat better than Ubuntu's. The RAID-1 got brought back up fairly quickly and I am just finishing off some installs. 

I guess eventually the map computer will also be migrated to Xubuntu but there is no real hurry for that.

Wednesday 10 August 2016

Linux Kiosk computer with Chrome Browser [3]

So having built our image and got it going pretty much as we please, the next thing you will want is to be able to clone from one computer to another.

The default Linux install can be a single partition for the OS and a swap partition. Since the OS is all in one partition, you can just copy that partition to each computer you want to install on. The main issue is that you aren't copying the Master Boot Record, so the computer won't be able to boot directly after installing the partition. You also need to create a swap partition.

After looking at a few different tools that I could use to do the cloning (a manual clone in which I copy the OS partition, set up a swap partition and set up the bootloader), the Gparted Live CD is the one to go for in this case. Gparted is a partitioning tool, but you can also use it to copy partitions between disks. You can write the iso image they supply to a pen drive and boot from it, then copy the partition from your source machine to another pen drive. Then, using the same two pen drives, you can restore the image to another machine.

The two additional things you need to do, which can be done from the same Gparted live CD session:
  • Create a swap partition. Make sure you turn on "swapon" in the right-click menu for this partition after creating it. I used 1-2 GB as a swap partition (my computers generally have 256 MB of RAM)
  • Reinstall the boot loader (GRUB). The steps for this are as follows
    • Find out in the Gparted main window where the bootable (OS) partition you copied to the disk is e.g. /dev/sda1
    • Go back to the main menu of the Gparted live CD and select Terminal
    • Type in the following commands:
    • sudo mount /dev/sda1 /mnt
    • sudo mount --bind /dev /mnt/dev
    • sudo mount --bind /proc /mnt/proc
    • sudo mount --bind /sys /mnt/sys
    • sudo chroot /mnt
    • grub-install /dev/sda
  • Then reboot and away you go.

I guess the Grub install steps could be scripted.
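
A sketch of such a script, written here but not executed (the partition and disk arguments are assumptions to be checked against Gparted's display first):

```shell
# Collect the chroot/grub-install steps into a script on the live system.
cat > /tmp/reinstall-grub.sh <<'EOF'
#!/bin/bash
set -e
PART=${1:-/dev/sda1}   # the copied OS partition
DISK=${2:-/dev/sda}    # the disk that should receive the boot loader
mount "$PART" /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt grub-install "$DISK"
umount /mnt/dev /mnt/proc /mnt/sys /mnt
EOF
chmod +x /tmp/reinstall-grub.sh
```

Run it as root from the Gparted live session, e.g. sudo /tmp/reinstall-grub.sh /dev/sda1 /dev/sda.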

The one thing possibly I may do with these computers is to test Ofris (a deep freeze tool) to see if they can be made completely bombproof. And I have made a simple website that will come up as the default in the web browser when you start them up. So there we have it, they just have to be good for 3 months of use. 

For remote maintenance you can just use SSH to log into them and check anything or maintain anything on them.

The last issue I needed to resolve when cloning was that the clones would have the network card coming up as eth1 instead of eth0; since the settings which enable the network card refer to eth0, which didn't exist on the clone, it would have no networking. With the help of the Ubuntu community I discovered that the file /etc/udev/rules.d/70-persistent-net.rules contains an entry that effectively locks the network card (eth0) to the MAC address of the original machine. The entries relating to this should be deleted before cloning the computer; but they are regenerated each time the computer boots, so you have to be sure you have removed them without booting back into the cloned environment, or else directly edit this file on the source partition while in your cloning environment.
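
In script form the cleanup might look like this, run from the cloning environment against the mounted copied partition (TARGET is an assumed mount point):

```shell
# Remove the cached NIC-to-MAC binding from a mounted root filesystem so
# the clone's network card enumerates as eth0 on first boot.
TARGET=${TARGET:-/mnt}
RULES="$TARGET/etc/udev/rules.d/70-persistent-net.rules"
rm -f "$RULES"
echo "cleared $RULES"
```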

Monday 8 August 2016

Linux Kiosk computer with Chrome Browser [2]

So my steps followed are:

  1. Install Ubuntu Server 14.04.5 from a bootable pen drive
  2. Set my username to user with a password
  3. Enabled automatic updates
  4. Install OpenSSH server in the tasksel at the end
  5. Reboot
  6. Log in locally or SSH into the system (the latter, from another system, is preferred since you can copy and paste instructions from this web page)
  7. Install some packages:
    1. sudo apt install --no-install-recommends chromium-browser
    2. sudo apt install  --no-install-recommends xorg openbox pulseaudio
    3. sudo usermod -a -G audio $USER
  8. Edit the kiosk.sh file:
    1. sudo nano /opt/kiosk.sh
    2. Enter the following lines:
      #!/bin/bash

      xset -dpms
      xset s off
      openbox-session &
      start-pulseaudio-x11

      while true; do
      rm -rf ~/.{config,cache}/chromium/
      chromium-browser --incognito --disable-background-mode --disable-sync --start-maximized --no-first-run 'http://www.example.com'
      done
    3. Ctrl-O to write the file then Ctrl-X to exit
  9. sudo chmod +x /opt/kiosk.sh to make the script executable
  10. Edit the kiosk.conf file:
    1. sudo nano /etc/init/kiosk.conf
    2. Enter the following lines:
      start on (filesystem and stopped udevtrigger)
      stop on runlevel [06]
      
      console output
      emits starting-x
      
      respawn
      
      exec sudo -u user startx /etc/X11/Xsession /opt/kiosk.sh --
      
      
    3. Ctrl-O to write the file then Ctrl-X to exit
  11. sudo dpkg-reconfigure x11-common and select "Anybody" can start the X server
  12. try sudo start kiosk for testing and away it should go (or reboot)
  13. Change grub configuration to hide startup messages:
    1. sudo nano /etc/default/grub
    2. Find GRUB_CMDLINE_LINUX_DEFAULT and put quiet splash inside the quotes.
    3. Ctrl-O to write the file then Ctrl-X to quit
    4. Run the command sudo update-grub
    5. Reboot to test
I am still looking at whether to clone the disk to copy it to another computer. It looks complex to do this, because we really want to be copying a whole disk (two partitions) instead of just a partition at a time.
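
If I do clone the whole disk, dd from a live environment is the blunt instrument for it; a sketch (the device names are assumptions, and swapping if/of is destructive, so verify with lsblk first):

```shell
# Clone the entire source disk: MBR, partition table and both partitions.
# /dev/sda = source, /dev/sdb = target; VERIFY with lsblk before running.
sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync status=progress
sudo sync
```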

Chromium policies are also a possible scenario as referred to in the previous post.

There is some OpenBox stuff we want to change, particularly the number of virtual desktops offered and the right-click menu that we don't want: only one virtual desktop, and disable the popup menu.

To do this, copy /etc/xdg/openbox/rc.xml to /home/user/.config/openbox and then use nano to edit the latter. Look for the <desktops> section to set the number of desktops to 1, look for the <keybind>s that relate to switching desktops, and look for the <mousebind> for a right mouse click that relates to the root menu, to get rid of the right-click menu.
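
The relevant fragments of rc.xml look something like this (element names from Openbox's stock rc.xml; the real file contains much more):

```xml
<!-- In ~/.config/openbox/rc.xml: limit to a single virtual desktop -->
<desktops>
  <number>1</number>
</desktops>

<!-- and in the Root context, remove the ShowMenu action bound to the
     right mouse button to get rid of the desktop popup menu -->
<context name="Root">
  <mousebind button="Right" action="Press">
    <!-- delete the <action name="ShowMenu"> element that was here -->
  </mousebind>
</context>
```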

Linux Kiosk computer with Chrome Browser [1]

Here is an option for making your own "Chromebox" out of some old PC you have lying around. It lends itself well to educational scenarios with old computers that are too slow to run a modern operating system, but which can be perfectly satisfactory for the Chrome browser to run Google Apps etc.

I am going to build up a test system using these instructions and then perhaps clone it to see how easy it is to set up a bulk lot of computers for this scenario. It is certainly a more viable option to use Ubuntu instead of Chromium OS. The latter has limited hardware support and every time we have tested it on regular desktop computers we have run into some sort of issues with it. Consequently just using regular Ubuntu has a lot in its favour. In this case the server version because it is very easy to customise exactly what is installed and therefore limit it to just the bare minimum.

Therefore I started by downloading the 32-bit Ubuntu Server 14.04 ISO and making a bootable pen drive for my test computer. In practice the computers it is going to run on are HP DC7100 PCs with 256 MB of RAM and a HDD of 40 GB or less; these computers have only a 32-bit CPU and cannot run 64-bit OSs. I used the USB Image Writer tool in Mint to put the ISO image on the pen drive for the test computer. The system was then booted and came up in a text-mode installer. The disk was partitioned into 1 GB of swap and 9 GB of ext4 mounted at /. The server operating system was then installed with the exact same options specified in the article. It was then allowed to reboot and came up normally.

The main difference from the article is that since it was written, Google has dropped 32-bit support for Chrome for Linux, which means I had to switch to Chromium instead. The biggest problem with Chromium is its lack of support for Adobe Flash, but hopefully Flash isn't needed in Google Apps. I also didn't want to use kiosk mode, so I changed some flags. My command to start the browser in /opt/kiosk.sh looks like this instead:
  •   chromium-browser --incognito --disable-background-mode --disable-sync --start-maximized --no-first-run 'somewebsite'

incognito is necessary to stop the user's settings (including their logged-in state) being saved when the browser is closed, so the user is automatically logged out of the browser. It also wipes the history etc. This matters when the same computer is being used by multiple Google Apps users, so that they don't inherit the history of the previous user. In our case, with web filtering in use, their activity is still logged by the web filtering system so we have a record of their usage.
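For reference, here is a minimal sketch of what my /opt/kiosk.sh amounts to. The URL is a placeholder as above, and the while loop is the usual kiosk trick of relaunching the browser if it ever exits; the original article's script may differ in detail:

```shell
#!/bin/sh
# Relaunch the browser whenever it exits so the kiosk never shows a bare desktop
while true; do
  chromium-browser --incognito --disable-background-mode --disable-sync \
      --start-maximized --no-first-run 'somewebsite'
done
```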

Once you have this thing set up fully as a "kiosk" you can use ssh to get in, change settings and do other things remotely.

Other things not mentioned in the original article:

  • Disable function keys that you would press to get to a terminal for logon (because you can always SSH to the computer remotely)
  • If you minimise the browser window then you will never see it again as there is no taskbar. But pressing Alt-Tab will bring it back.
  • Downloading and so forth onto the local computer needs to be disabled. In other words, that rm -rf command needs to be extended to cover other local directories that files could end up in; otherwise the disk can fill up when you don't want it to.
  • Chromium policies may be useful to set in addition to the kiosk settings. See links below.
I will post a complete list of steps taken to set up the kiosk in my next post.



Saturday 6 August 2016

Linux RAID-1 [6]: Reinstalling with RAID

Well today I had to reinstall Mint on one of my computers, both of which have RAID-1 arrays. This time I installed Mint 18 onto the DB75EN. And of course I wanted to pick up my existing RAID array without having to build it from scratch. Initially I was going to build a test system to see if I could pick up the existing array, before doing it live, but it was easier just to back up all the data and go straight onto trying it on the live system. I used the previous posts I have written as guides because it was nearly all the same, but the steps are outlined in full below.

It goes without saying that the first thing you should do before reinstalling is, of course, to back up your existing data, which in this case is the contents of /home, which is the mount point of that RAID array in my case, as I am using a separate SSD as /. I am continuing with that setup, and therefore the structure of the disk and array is essentially unchanged.

There are lots of different ways of setting up RAID when reinstalling. Mint doesn't have the alternate installer that Ubuntu has, which enables the RAID to be set up during install. Setting up the RAID during install is essential if you want to boot off it. Since booting off RAID is not particularly critical on an average PC or server, using a separate non-RAID disk for the operating system and swap is what I have done on my RAID-equipped PCs and servers for quite a while. In my case I have chosen to install Mint as normal and recover the RAID array after completing the install.

So my starting point is that the home drive will be on the SSD by default, and I have to install mdadm, assemble the array (rather than creating it), then change the mount point for /home to point to it, and everything should be sweet. As it turned out the assemble worked flawlessly, and this is how you pick up an existing RAID array from a previous install. If /home is mounted on your RAID array, setting it up should be one of the very first steps after installing Mint, before you configure anything or install any software, because you don't want files being put into the current location of /home and then getting lost when you move it to the RAID-1 array.

Steps taken are as follows (precede these with sudo as necessary):

Install mdadm:
  • apt install mdadm
Tell mdadm to find the existing array:
  • mdadm --assemble --scan
See if mdadm has stored the array information in its conf file:
  • gedit /etc/mdadm/mdadm.conf
At this point I can see the details of my array in the configuration file, so it looks like mdadm has located it. My next step is to open the Disks tool, find the array and mount it, which for now puts it under the /media path.

Now we have to look at what is needed at startup, namely the kernel modules, so we use gedit to add the necessary line to /etc/modules:
  •  gedit /etc/modules
raid1 is required (md might also be required if it exists on your system).

raid1 is already loaded in memory; this step just makes sure it will load at boot time the next time you restart the system. I chose to restart at this point to check everything would work OK, which it did.
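For the record, the edit can also be done non-interactively. Here is the idea rehearsed against a scratch copy of the file, since appending to the real /etc/modules needs root (the modules.test filename is just for this demonstration):

```shell
# Work on a copy; on the real system the target is /etc/modules (with sudo)
cp /etc/modules modules.test 2>/dev/null || touch modules.test
# Append raid1 only if it is not already listed, so the edit is idempotent
grep -qx raid1 modules.test || echo raid1 >> modules.test
grep -x raid1 modules.test   # prints: raid1
```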

Next, we have a look in mdadm's conf file to see if it has the details of our existing array in there:
  • cat /etc/mdadm/mdadm.conf
And yes, the array details are there. So this is all that is needed to get the RAID array going when the computer starts up.
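Had the details not been saved automatically, the conventional fix (a sketch; note this appends, so check for duplicate ARRAY lines afterwards) is to capture them from the running array and then rebuild the initramfs so the array assembles at boot:

```shell
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```

This stores a line of the form ARRAY /dev/md0 metadata=... name=... UUID=... in the conf file.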


Our final steps are what is needed to relocate /home back onto the RAID array and get it to mount automatically at startup. First of all we need to get the UUID of the partition on the RAID array. We use the following command to do this:
  • blkid
This gives me the UUID for /dev/md0 in this case, and now I need to edit /etc/fstab to add the mount entry, using gedit:
  • gedit /etc/fstab
and my line will look something like this:
UUID=581966fb-0d41-4dac-bef3-c6de6267ec9e /home           ext4    defaults        0       2 
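As an aside, the UUID can be pulled out of blkid-style output with a one-liner; here it is demonstrated against a sample line (the device details are illustrative, but the UUID matches my array):

```shell
# A line of the kind blkid prints for the array (sample, not live output):
sample='/dev/md0: UUID="581966fb-0d41-4dac-bef3-c6de6267ec9e" TYPE="ext4"'
# Extract just the value between the UUID quotes
echo "$sample" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p'
```

On a live system, blkid -s UUID -o value /dev/md0 does the same job directly.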

Now I have to move the existing home directory out of the way on the root volume and prepare the mount point for the new one:
  • mv /home /oldhome
  • mkdir /home
  • mount -a
The first two commands simply move the existing home directory to another location on the boot volume and create a new, empty home directory to be used as the mount point for the RAID array. The mount -a command then mounts the RAID volume to /home by reading fstab.


This all checked out OK, so now everything is back to where it was for the home drive on the RAID array. Since my home drive wasn't wiped, when I start up Thunderbird, for example, it picks up all of my existing configuration and email and doesn't require me to set it up again. This is even better than separating partitions on Windows because
  1. Windows doesn't officially support separate partitions for User folders, and quite often it won't update to a new version if you have this hack (set in the registry) in place.
  2. You can't simply use a previous user profile because Windows will get all upset and create a new blank one, so for practical reasons it is easier to back up the existing user profile. But then the registry is wiped, and not only do you have to reinstall each application, you also have to redo their settings. With Mint, you only have to reinstall the applications; because it doesn't use a central registry, configuration data is stored in the user's home folder, and applications pick that data up right where they left off, in most cases.
So that completes reinstalling Mint with a RAID array. Only one computer has Mint 18 so far. There is an issue with 18 that causes problems with Google Earth so the map making computer is not getting an upgrade just yet. That computer will, however, soon have 24 GB of RAM installed to make it faster with all the map editing stuff running on it as well as a virtual machine running Windows at the same time.

One of the objectives for reinstalling was to get hibernation working again after it stopped working when I changed the video card, swapping cards with the H97-D3H. There was probably some other complex process I could have followed, but since there is a new version of Mint out and since it is so much quicker to reinstall compared to Windows, that was the solution I chose, and hibernation is working again. The flipside of faster reinstallation of the OS and software compared to Windows is having to do things like reconnect software RAID arrays. You have to do this on Windows as well, but there it is all in a GUI: generally you run the Computer Management administrative tool, open Disk Management and import a foreign RAID array.

DEBIAN NOTE: When I install Debian it usually installs mdadm during the installation so it can present the RAID array in the partitioner. This means a lot of the above steps are unnecessary. All I normally have to do with Debian is put the raid1 module into /etc/modules, set up the mount in /etc/fstab and move the old home directory.

Thursday 4 August 2016

Minties

After much thought I decided the "second" computer (which is actually the newest and therefore fastest) will be used for the maps that I draw, and the "first" computer (older) is the one I use for my study and some work as well. So the second computer now has the quad head video card, three screens and the most memory (currently 16 GB, but it's going to be increased to 24 GB as it is capable of having 32 GB installed). Finally I have worked out how to make the best use of the computing resources I have. The 2nd computer has also had a keyboard/mouse slide installed into the desk that it sits on, and the desk has been moved to make it easier to use.

The allocation between the two computers is driven by having a suitable study location where there is less road noise in the house and also making sure the computers are both being used. It helps that the 2nd location can be heated effectively with just an 800 watt radiant heater, often at only 400 watts, otherwise the 1st location would have to be used for everything in order to save power in winter.

Since I was mostly using one computer for everything, it made no sense for all of those resources to be concentrated on one computer when the second one is available. The second reason is having the scanner and printer in the same room, which also means I only need one Windows PC in the house, as described below.

The bedroom computer is running Mint 17.3 and will be staying on it for now, while the lounge computer is being updated to Mint 18. A key requirement for any reinstallation is picking up the RAID arrays that I use for the home partition, hopefully without having to rebuild them from scratch. Here I am assuming mdadm can assemble the existing array back together so that Mint recognises it; rebuilding would require everything to be taken off the disks because they would get wiped. Since the scanner has been moved to the lounge, it can connect to the Windows computer in there, which means the bedroom doesn't need a Windows computer alongside the Mint PC, so I can go back down to three computers in total.