Monday 2 December 2019

Debugging Linux startup issues with journalctl

My computers running Linux are very reliable but every so often a spanner will be thrown in the works and the computer will refuse to come up beyond a command prompt. When this happens you have to look at the logs to try to work out what is happening in the system and how to get it going again.

This happened to me a couple of days ago when I turned on the computer: instead of resuming from hibernation, it threw up a number of error messages and then asked for the root password for maintenance (or offered Ctrl-D to continue). Pressing Ctrl-D simply put the computer into a loop where it ran fsck on the boot drive only, failed to resolve any issues, and came back to the same message. The alternative was to enter the root password, which dropped to a terminal to see what could be done.

Since the advent of systemd, its companion journalctl (run with the -xb options) is the recommended place to look to determine what the problem is. This took quite a while and it wasn't particularly obvious what was actually happening. The main issue I noticed immediately was no networking; the wired network adapter in the computer appeared not to be functioning, indicating that the part of startup that brings the adapter up had not run. I tried a whole lot of things but eventually concluded I would have to try a live CD to see whether this was a hardware issue or something with a driver.
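For reference, these are roughly the journalctl invocations that are useful in this situation: the first is the general recommendation above, the second narrows the current boot down to error-level messages, and the third (with a placeholder unit name) shows the log for one particular service.

journalctl -xb
journalctl -b -p err
journalctl -b -u some-unit.service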

The network turned out to be a red herring once I realised that there was no home path in the file tree (i.e. /home was completely missing), and fairly soon running fsck over /dev/md1 revealed some errors. Once these had been fixed, rebooting the system brought up KDE normally and suddenly everything was working again as usual.
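For the record, the actual repair boiled down to something along these lines, assuming (as on this machine) that /dev/md1 is the array holding /home and that it is not mounted at the time:

fsck -f /dev/md1

answering y to the repair prompts (or adding -y to accept them all), followed by a normal reboot.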

It isn't the first time this has occurred and I think last time we just threw in the towel and reinstalled. While trying to resolve this issue I was downloading netinst and live images to burn to a pen drive and try a reinstall, but finding this relatively simple solution sorted the issue without having to take that complex, lengthy and very inconvenient route of putting everything back together. And what seems an obvious issue may actually not be the problem at all: the network adapter not working turned out to be a non-issue as soon as the disk problem was taken care of and a reboot initiated.

Monday 18 November 2019

NZ Rail Maps: Large Area Raster Tiles At Reduced Quality Setting

Using Qgis to draw the maps, we've run up against, a few times, some sort of limit it seems to have on the number of raster layers, which is reached a long time before hitting the actual resource limits of the computer it's running on. This in turn limits the number of raster layers that can be included in a project, which has created quite a few challenges in creating map volumes. So a few times we have looked at ways of making bigger raster layers to reduce the physical number of them used. At the same time we have had a play with JPEG quality, reducing it from 100 to 50, which drastically cuts file size.

For example, over Lyttelton, we have a mosaic with a resolution of 57600x28800 (1,658,880,000 pixels in total) made up of 48 tiles (12x4) at 4800x7200 resolution each. The 48 tiles downloaded from LINZ are 600 MiB in total. For the 1967 era layers we produced a total of 32 exported tiles which also came to about 600 MiB at a quality of 100. However the large area tile that we produced for this article, covering the entire 48 tile space at quality 50, uses only 156 MB, which is quite a lot smaller overall for only an imperceptible loss of quality. We may even try lower quality settings but for now, 50 is where we will leave this, as it will have to be tested over a wide range of mosaics.

Creating the full size mosaics is also much faster as there is only one tile output for each era, so in future we will be doing all the exports for Greater Christchurch Maps at full size for each mosaic project in order to speed up the exporting process. Some additional work may be needed to make the covered areas full rectangles without gaps, since gaps are generally exported as black areas that we don't want in large tiles. As it happens, with the Lyttelton examples, a number of large tiles will have the Linz 2015 contemporary areas included around the edges by default as the historical areas often do not cover the full canvas.

The main challenges are where a base area is not a perfect rectangle, meaning it will have to be broken down into two or more areas that are, and the time that it takes Qgis to refresh its canvas with the larger layer size. At the moment these issues are not too challenging and we expect to be able to work around them satisfactorily. Making the sidecar files is extremely straightforward: all that is needed is to copy the sidecars for the top left tile of the original tile grid and rename them to suit the new file name specification, because the only coordinates in the world file are for the top left corner and the software assumes it should just keep drawing the tile until all pixels have been rendered.
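As a concrete example (the file names are made up for illustration), if the top left tile of the original grid was exported as lyttelton-1967-0-0.jpg and the new large tile is lyttelton-1967-full.jpg, then duplicating the sidecars is just:

cp lyttelton-1967-0-0.jgw lyttelton-1967-full.jgw
cp lyttelton-1967-0-0.jpg.aux.xml lyttelton-1967-full.jpg.aux.xml

with the second copy only needed if an .aux.xml sidecar exists alongside the world file.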

We have also observed that Qgis doesn't seem to have any issue loading raster tiles from a WMTS server in projects that also have local raster layers. So one option is to work out how to run a WMTS server on our local network to serve the raster tiles instead of being file based. It also raises a question about the architecture of the software, as it is evidently handling WMTS layers better than local file based layers.

Change Monitor Configurations Permanently in LXQt for Screen Recording

As we have discussed in the past, I use LXQt in two different configurations with my various computers. Lubuntu, which now uses LXQt as its desktop environment, is what most of my non-desktop computers use. For example a media playback computer and a laptop are normally set up with Lubuntu which is quick and easy to install as it has all the drivers included in the setup.

For my desktops I am using Debian, with LXQt on one computer in particular because that computer needs things like hibernation that are better supported in Debian. LXQt also gets around some of the bugs in KDE, which I use on other desktops, as well as ensuring that the resources KDE uses aren't wasted on its lower spec hardware. So I am testing these settings on that desktop because it has two screens and it is in fact the main computer that I have at the moment that these settings are important for.

Because I use this computer to do screen recording (using SimpleScreenRecorder), the screens have to be placed almost diagonally to each other. That is, the left hand screen and the right hand screen are to the left and right of each other, but vertically there is only a tiny overlap between the top right corner of the left screen and the bottom left corner of the right screen. This means the mouse can only be moved from one screen to the other by passing through that very small area of overlap, and windows have to be dragged from one screen to the other through the same small area.

The reason for this configuration is that for screen recording, we have the software that is doing the recording running on the left (control) screen, recording the right (target) screen. We want to ensure what we need to do on the left screen does not cause anything unwanted to be displayed on the right screen. Some examples of this are:
  • accidentally moving the mouse from the control screen onto the target screen
  • any action on the control screen which causes pop up notifications, such as operating keyboard volume up and down. Even if the popup is on the control screen, if it is adjacent to an area of the target screen it can cause unwanted display of information on the target screen.
We ensure this by keeping, as described above, the minimum overlap between the two screens, just enough to get the mouse from one to the other and to drag windows across. We still have to be careful not to run the control screen applications maximised, so that the mouse stays out of the top right corner of the control screen at all times other than when actually going from one screen to the other.

The LXQt monitor settings GUI doesn't actually position the screens exactly as we would like. They can be aligned vertically and horizontally close to the desired outcome, but with a small amount of unwanted superimposition of the two screens. Fortunately you aren't limited to using that GUI to set positions. After changing the positions in the GUI and applying or saving the settings, you can further tweak them by manually editing the file ~/.config/lxqt/lxqt-config-monitor.conf. These seem to be just the same sort of parameter specs that running xrandr will spit out for you. Under the [currentConfig] section are the ones presently being used for the computer, and with a few adjustments we can eliminate that superimposition and ensure the overlap is as small as possible. For example, with the control screen being screen 1 on HDMI-1 (settings\1) at a resolution of 1440x900, the settings are xPos=0 yPos=745, and for the target screen being screen 2 on HDMI-2 (settings\2) at resolution 1360x768, they are xPos=1440 yPos=0.
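For illustration, the relevant part of lxqt-config-monitor.conf ends up looking something like the following. The key layout here is reconstructed from the settings described above; the real file contains a number of other keys which are left untouched.

[currentConfig]
settings\1\xPos=0
settings\1\yPos=745
settings\2\xPos=1440
settings\2\yPos=0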

The names of the displays xrandr generates for this system are interesting: it has one HDMI port, one DVI port and one VGA port, yet the DVI port appears as a second HDMI port in the list of outputs. It seems this initial configuration of the displays happens just after logon. Also, the settings are saved in the user's profile (~/.config folder), which means that when reinstalling, these settings should carry over into the new installation of LXQt.


Using a Raspberry Pi as a livestream player [1]

When I wrote the first post of this series I fully expected that using my former Windows 10 PC as a home theater PC would work out. That didn't last long and the Windows 10 part was soon moved to another computer so the Mini-ITX system could be reinstalled with Lubuntu for the specific role it was needed for. That hasn't worked out either; in both cases the older low-spec hardware is the issue, not specifically because it is low-spec but because it is older. The AMD E350, while good for desktop use, is not recent enough to decode modern video codecs fast enough to give good playback of Youtube and other live streams.

So as the updates to that post attest, I had moved to using the Antec chassis system with Lubuntu, but that has not worked out after about a month of use. The only reasonably cheap solution, therefore, is to move to using a Raspberry Pi for this application. Although my one and only Pi is being used at my desk for a particular application where it gets its internet connection off my phone's mobile data, I can use an old laptop for that particular purpose in the meantime while I use the Pi for the bedside PC application (for night prayer/intercession use), until I can get another Pi.

The model 3 B Pi is just slightly too underspecced to keep up with all livestreams. It actually still works very well with Youtube at 240p or lower, but at higher rates it tends to drop out a little. That is OK for now, and far better than the E350. The Pi 4 which has just been released is available with 2 GB of onboard RAM at $85 for the board ($105 with 4 GB of RAM), is much faster than the 3 B, and has many enhancements such as dual micro-HDMI ports, USB C power, etc. To that price you have to add a case and power supply, which can be done in stages as funds permit. There is no real hurry on that as I can get by for now with the 3 B, which is currently powered off a spare Ipad power adapter. I could continue a similar arrangement with the Pi 4 and save on the cost of a power adapter, but it will be a toss-up whether to go with 2 GB or 4 GB of RAM, and also which distro to run on it if I want to use Firefox, which is apparently unavailable for Raspbian.

So the Antec chassis will go back to being my Windows 10 PC, which in turn can be moved out of a bulky spare desktop chassis again.

Friday 8 November 2019

Power Tools: Best brand of home power tools?

If you are working at the cheap end of power tools there are a few brands to choose from. I have bought Ryobi stuff a few times: weedeaters have been good, as has a 1200 watt electric drill (a knockoff of a Bosch or Hitachi design) with variable speed, hammer action, 2 speeds, reverse and so on. I used this some years ago to drill a lot of holes in desks to run cables through and it did a great job.

Then there is Makita. I have owned a few Makita products, though not anything I bought myself: orbital sanders and both mains powered and cordless drills, some of which are still in use today.

Then there is Bosch. Their home stuff has been quite good, from a range of cordless drills, saws, heat guns and sanders.

The problem seems to be these days that all three of these brands, plus Black and Decker, have gone mad and are now producing cheap rubbish that only lasts a short time. Well, B&D stuff was always cheap and nasty but now there is a race to the bottom. Some examples:
  • Mouse sanders (triangular detail sanders for sanding into corners): Bosch have two models that are of very poor quality. The base plate is made of foam onto which the sanding pad is held with velcro, and the pad, being a soft material, disintegrates and is then expensive to replace. It is a ludicrously poor design, with the dust collector box also being either difficult to remove and empty, or falling off all the time.
  • 1/4 sheet detail sander: Makita have done a very poor job with their BO4555/4556 models. Numerous reviews worldwide attest that the paper clamps on these will break off after only a few uses, whilst there have also been a number of people who have found the whole base plate has broken off completely.
  • Any sander that uses Velcro fastening: there are too many of these where the hooks and loops wear down quite quickly and will no longer hold the sanding sheet in place. This also includes the sanding pads that can be bought for multitools.
Seeing these situations for these brands of tools which have previously been fairly well regarded is a big deal for me. It seems these manufacturers are just producing throwaway junk with a life that can be measured in hours.

These situations are hard to understand because I have here an older Bosch random orbital sander that has a velcro pad for attaching sandpaper and not only is the pad made of good quality material (hard rubber) but also the velcro works very well and holds the sheets in place properly and they don't fall off. Generally the older Bosch tools are well made and don't break. The Makita detail sander I have is well made and has given a lot of use (except for the dust bag which has a flimsy internal support that broke). 

So I have to look at some other choices:
  • Metabo (now part of the same group as Hitachi's power tool business)
  • Bosch Professional
  • Hikoki ?
  • DeWalt
Metabo is the only one of these that doesn't seem to be specifically geared at the professional trade (but I could be mistaken in that assumption). Their gear is going to cost twice as much as Bosch but about the same as Makita, yet it looks to be of better quality. They have a mouse sander that Mitre 10 carries in stock for about $175 that appears to be quite well made although reviews are hard to find.

DeWalt don't have a mouse sander but they do have a 1/4 sheet sander and also a random orbital sander, which are priced around $175-225 in NZ and appear to be well made. Bosch have some professional products which can be hard to find here.

Mostly what I see in reviews for certain home user Bosch and Makita products is people questioning why these manufacturers now turn out cheap junk when they used to produce good quality stuff. At least everyone already thought Black and Decker was low end but it seems Bosch and Makita are falling over themselves to emulate B&D and see who can produce the cheapest flimsiest tools that don't last much longer than the ones you can buy from The Warehouse.

I don't need to have any more sanding gear at the moment so this is an interesting discussion / conversation for future reference.

Friday 1 November 2019

Python Scripting: Planned Scripting Projects Update

Back in February this year I set out a list of planned scripting projects following my acquisition of Python programming skills. The sequence of planned or implemented projects to date is:
  1. NZ Rail Maps project script to copy a set of GIS raster layer files based on reading a QLR file produced by Qgis. Currently used occasionally.
  2. Auto sync script for creating an audio-only clone of a collection of video files. Not yet started.
  3. A script to produce layer fractional segment sidecar files for GIS raster layers. Not used at present due to this feature not currently being used for mosaic tile generation.
  4. EXIF based image rename script for digital camera photos / movies. Used almost daily.
  5. I'm not sure what scripting project was going to be number 5, because I went straight to 6 instead, and it's a mystery at the moment why I did this. It would have made sense at the time; either project 5 was never written down, or it appears in one of the previous articles that I haven't re-read thoroughly enough to find it.
  6. Taking the script from (3) above and changing it into a straight duplication script for simply copying sidecars where layers are duplicated. This is currently used quite regularly when mosaic tiles are exported for use with Qgis.
It is now time to pick up item (2) and start work on it, hopefully this week. I now need to be able to play back music from a phone, which means automating the audio extraction from video files to produce a music-only tree clone of a video file collection.
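The rough shape of the logic is sketched below as a shell loop; the real project will be a Python script, and the paths, file extension and ffmpeg options here are assumptions for illustration only.

# walk the video tree and build an audio-only mirror of it (sketch only)
SRC=/path/to/videos
DST=/path/to/audio-clone
find "$SRC" -type f -name '*.mp4' | while read -r f; do
    out="$DST/${f#$SRC/}"          # same relative path under the clone tree
    out="${out%.mp4}.m4a"          # swap the extension
    mkdir -p "$(dirname "$out")"
    [ -e "$out" ] || ffmpeg -n -i "$f" -vn -c:a copy "$out"   # copy the audio stream only, skip existing files
done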

OverGrive seems to work again...for now.

OverGrive is a Google Drive client for Linux. It has been around for several years. I used (and licensed) an earlier version of the product, which I was forced to stop using last year because it stopped working (this quite often happens when Google updates their API to a new version which requires software changes). At that point, with no apparent communication or updates from the developer, I switched to the free "grive" package, which is command line only and lacks the GUI configuration interface, system tray icon and other features of OverGrive. However that also stopped working earlier this year, probably for similar reasons.

It so happened I was reading a Linux website today and saw an article on Tech Republic showcasing OverGrive, so I decided to try reinstalling it, and was surprised to find that it works. The version listed on their website is still the same version as I had installed before. It seems to work OK on Buster despite this (the free grive package had to have specific versions for each release of Ubuntu, which was a trial and error installation onto Debian because of course, Ubuntu uses a different release schedule).

I currently have two licenses for two different versions, on two different computers. One of these is for the NZ Rail Maps project and provides for a daily backup of the core project files and layers. It doesn't, of course, include all the aerial photo layers which would not fit into a 15 GB Google Drive cloud storage space. However at present these layers are stored on two different computers as well as being backed up on removable disk.

So with very minimal documentation on OverGrive it's hard to understand why it stopped working on my computer and whether there really has been an update to it. The homepage still links to the highly out of date GitHub and Launchpad repositories, without any explanation of why these areas have not been updated for several years. If this outfit actually still maintains their product and takes an active interest in it, they haven't exactly made it obvious.

Wednesday 23 October 2019

Red/green/white handheld signalling lamp

Anyone who is associated with rail heritage knows that on the professional railways in New Zealand, train crews used hand held signalling lamps that could produce red, green and white light, for use when shunting a train. The older style lamps of this type worked with a mechanical rotating filter holder that moved red and green filters in front of a torch bulb when a knob was turned on top of the lamp. I've often thought about how easy it would be to make an electronic version out of parts.

When I first started to get interested in electronics, LEDs were only available in red and green, and didn't put out much light. Only with advances in LED technology in recent years, namely the invention of a blue LED to supply the missing primary colour needed to produce white light, and the ability to do this at very high brightness, has it become practical to make your own lamp that can put out a reasonable amount of light and thus be visible.

To make a handheld signalling lamp, the parts needed are basically some LEDs for each colour, batteries, switches, a case and perhaps a handle. You could use the tri-colour LEDs that are available and set the proportions of colours needed to produce red, green and white by switching in resistors with a multi-pole rotary switch. You could also use separate red, green and white LEDs to produce the colours. My first thought is to go with the second option. Jaycar has suitable LEDs for about $4 each. I would go for the 10 mm size and possibly at least four of each colour, maybe more, or spread them out a bit.

For switches it is a question of finding reasonably waterproof ones, and maybe more than one will be needed. The most important colour you want to display is red, which means stop, as this is the most significant safety action. The main problem with many IP rated switches is they are only SPST. Jaycar does have some momentary pushbuttons that are DPDT. These are important if you want to be sure only one colour can be illuminated at a time. There would be a toggle switch for on/off, and push buttons for green and white, wired (hopefully) so as to cut off power to the other colours when one is pressed.

A case would have to be clear plastic, and there is a limited range of these from Jaycar and other suppliers. A power pack would be a battery holder to hold some AAs.

Well that is the theory anyway and at the moment I'll leave this idea there and spend a bit of time looking in more detail into costs and so forth.

Rsync Backup System [4]: Linux File and Directory Security Using ACLs

As per our series on Rsync backups, we desire to use a different user from the one that owns the home directory in order to ensure they have only the permissions they need when logging in remotely. At this point whilst by default the backup user could access many things on the computer, there were some files that they received permission denied errors for.

Whilst there is a simple file and directory permission scheme built into Linux, the acl extensions are very worthwhile as they give a lot more control, especially with the all-important ability to have inheritable permissions on a parent folder. In this case we'd want to give the backup user read-only permissions on the home directory that are inherited by all subdirectories and files as they are created.

Getting ACLs going is pretty simple. Firstly you need to install the acl package using apt. The second requirement is to see whether the volume is mounted with acl support: run tune2fs -l /dev/xxx (part of the e2fsprogs package) and look for "Default mount options". If acl is listed there then the volume will be mounted with ACLs enabled by default. If it is not, you can go to the volume's mount entry in /etc/fstab and add acl to the mount options there. In this case, because I used "defaults" as the mount options in fstab, acl is automatically turned on at mount time. If I needed to specify acl then I could change the word "defaults" to "defaults,acl" in the fstab entry to enable ACLs.
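Summarising the above as commands (the device name is an example only):

apt install acl
tune2fs -l /dev/sda1 | grep "Default mount options"

and if acl is not among the default mount options, the fstab entry would look something like:

UUID=....     /home     ext4     defaults,acl     0 2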
ZFS also has its own (more extensive) ACL support which is not covered by this post as we are referring to the existing filesystems which are ext4 on the computers being backed up in this case. I am not planning to change the existing RAID arrays from MDADM to ZFS but it could be an interesting consideration for the future should a disk array be replaced in any system.

Now how to configure ACLs for our requirement? Commands used to set and get ACLs are setfacl and getfacl respectively. To set up read access for backupuser on /home/patrick we need to use a command that looks like this:
setfacl -dR -m u:backupuser:rx /home/patrick

If we then use an ls -l (or ls -al) command in /home/patrick we can see, next to the standard permissions, a + sign which tells us that ACLs are set. In this case I can see that whilst all the directories have ACLs set, the files within each directory do not, because a default ACL can only be placed on directories and only affects items created afterwards. So I need to run another command to cover the existing files:
setfacl -R -m u:backupuser:rx /home/patrick
The two commands work as follows. The first one uses:
  • -d to set the default ACL for each directory (the default ACL is the one applied to any new file or directory created within it)
  • -R to operate recursively on all subdirectories
  • -m to modify (add) the ACL entries
  • u:backupuser:rx to grant the user backupuser read and execute permission.

The second one is practically the same except that no default ACL is set; it applies an ordinary access ACL to everything that already exists under the path, rather than only to items created in future.

Whenever a new file or directory is created in future it will inherit the default ACLs that have been defined using the first command which means backupuser will get the permissions it needs. Multiple paths to apply the ACLs to can be specified but only one user specification can be put in at a time.

ACLs override the default permissions system when they come into effect and so you need to have good knowledge of how Linux permissions work. In particular you need to ensure your user has x (execute) as well as r (read) permissions specified in ACLs for directories in order to be able to traverse directories, otherwise you will keep getting permissions issues. It took me a bit of work to get the ACLs set up but once they are done properly, the advantage is that inheritable permissions are possible, so for my backups, the backup user will have automatic rights to new files and folders that are created in future. In fact for any computer the way to make it happen is to set a default ACL on /home that gives the backup user rx permissions on everything below in future.

With the ACLs properly set the test backup was able to access all the files it needed to and complete the backup. Since I knew there were roughly 114,000 items waiting for backup, it was interesting to watch rsync at work discovering new files (a big increase from the roughly 7000 it had access to the first time). 

The means of doing the incrementals is still being worked out as it is not as straightforward as I had hoped. The log file format is not consistent enough (at least not so far) to make it easy to determine which files transferred successfully. The option I am tending towards would probably consist of getting a list of files that have changed since the last backup, then getting rsync to copy them one at a time, looking at the result code that it returns, and if there is no error then writing an extended attribute on the source file to indicate a successful backup. The extended attributes will probably be backup.full = <date> and backup.incr = <date>, and these two attributes will be written by the appropriate backup script. A full backup simply updates its attribute at backup completion without checking it, whereas for an incremental we compare the last modified date of the file with the stored date (backup.incr if set, otherwise backup.full) and back the file up if it has been modified since then. So I expect to start more testing reasonably soon with developing scripts to perform these tasks.
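A minimal sketch of how the attribute handling could look, assuming the attributes live in the user. namespace that setfattr and getfattr require for ordinary files (names and dates are placeholders):

# after a successful copy of "$f", record the backup date on the source file
setfattr -n user.backup.incr -v "$(date +%Y-%m-%d)" "$f"

# when deciding whether to include "$f" in an incremental, compare its mtime with the stored date
last=$(getfattr --only-values -n user.backup.incr "$f" 2>/dev/null || echo 1970-01-01)
[ "$(date -r "$f" +%Y-%m-%d)" \> "$last" ] && echo "$f needs backing up"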

But as far as the full backups go, it's doing pretty well. I have just set up another full backup and it's working very well; the second time setting it all up was much smoother. It also looks like ZFS disk compression works very well and has achieved some useful savings, but I will check exactly what sort of improvement I am getting on these disks.

Arlec Plug In Heater Controls [2]

In my previous post in this series I described the PC900 2 hour plug in countdown timer which Bunnings have been selling for some time. The oddity of this product is that it is described on the packaging as having "adjustable switching increments". This is a rather clumsy phrase more worthy of adjustable 24 hour timers, and it refers to the fact that the timer can be set for anything from 1 minute up to 120 minutes each time it is used.

In the same post I referred also to the new THP401 plug in thermostat which Arlec's packaging describes as a "temperature controlled programmable timer". The problem with using the wrong terminology to describe a product is that it sends people up the garden path. The description as above is, once again, carried through into the product packaging. Someone at Arlec needs to improve their knowledge of the English language.

I happened to visit Bunnings again today and one of these was on the shelf at $29.95. The product is very clearly not a "timer" but a plug in heater thermostat, and obviously a very useful thing to have. The problem is whether people will understand what it does with the wrong labelling.

Tuesday 22 October 2019

Rsync Backup System [3]: Using Rsync For Full Backup

So last time we talked about how to set up a single disk with ZFS to use compression. Having got our backup disks sorted, the next step is to work out how to use rsync to do the actual backups.

rsync is written by the same people that devised samba and is a very powerful piece of software. What we need to do is to first of all set up a dedicated backup user on the system that is to be backed up, that has only read access to the filesystem. Having this is extremely important in case we make a mistake and accidentally overwrite the files on the source filesystem (which is quite possible with such a powerful system).

Using useradd we can set up backupuser, in this case with a very simple password consisting of four consecutive digits, and by default it will have read only access to other users' home directories. This means it can read my home directory. backupuser is then used with SSH to access the home directory over the network for creating the backup.
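For reference, the setup is nothing more than the following (the account name is just the one used in this series):

useradd -m backupuser
passwd backupuser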

The next step is to look at the required form of the rsync command. There are certainly a lot of options for this. Assuming the backup destination zpool has been mounted to /mnt/backup/fullbackup and the source is on 192.168.x.y at /home/patrick then a good starting point for the backup would be along the following lines:

rsync -arXvz --progress --delete backupuser@192.168.x.y:/home/patrick/ /mnt/backup/fullbackup/patrick --log-file=/home/patrick/rsync.log

This looks to cover all the bases needed. The options are: -a, which means copy in archive mode (preserving all source file attributes); -r, which means recurse into source directories (-a actually implies -r, but listing it does no harm); -v, which means verbose; and -z, which means compress during transfers (which can speed things up). --progress means during each file transfer you will see the bytes transferred, speed, percentage and ETA, which is useful for large files. --delete will remove extraneous files from destination directories (files that aren't in the source); this option is obviously useful when a file gets deleted or moved in the source. -X means to preserve extended file attributes if any exist.

At the moment I am testing this with a full backup of serverpc running in a terminal window. There are some issues with some directories not having permission for backupuser, which so far has only affected a few hidden folders but will have to be looked into further. Previous backups always used patrick as the user, but it is pretty important to have a special backup user with restricted permissions, which is really a best practice for any kind of professional computer setup; for example, a mistake with the rsync command could wipe out the source directory if the backup user had read-write access to it.

Looking at the logging options, rsync lets us set the log file format with an extra parameter, and this is what I came up with after deciding what information would be useful on each line of the log:

rsync -arXvz --progress --delete backupuser@192.168.x.y:/home/patrick/ /mnt/backup/fullbackup/patrick --log-file=/home/patrick/rsync.log --log-file-format "%a|%f|%M|%l|%b|%o|%U"

Using in particular the log file format option, it seems to be usefully logging the information we need for each file. However the %a logfile option does not actually seem to be recognised by rsync. 

There is one thing that rsync does not do by itself and that is incremental backups to a separate disk. rsync can create incremental backups in a separate directory from the one where the full backup is stored, but the full backup directory must be online at the time the incremental is done so that it knows which files have changed. This is a problem when, as I intend, you want to be able to use separate disks for the incrementals. Here you see the fundamental issue with the Linux file system (at least ext4): it does not have the separate archive flag for a file that NTFS has. The argument goes that it isn't necessary, but that archive bit is how you can tell that a file has been modified since the backup last ran and therefore needs to go into the incremental.

There are several possible solutions to this and one of them is to create an extended attribute for each source file. This is possible using the setfattr command. So we could create this on each of the source files following the full backup (it would have to be a separate script process following the execution of rsync). Maybe the script would be part of a verify process that we run after the backup is carried out, to verify that the source and destination files both exist, and then write the extended attribute to the source file. The issue is that it may well prove difficult to be sure the source file was backed up unless we can have a look at a log file and prove that it lists the source files and then feed that into a script that produces the extended attribute. Anyway, the point of this is to write an extended attribute to each source file that says when that file was last backed up. This could be a part of an incremental script that uses find to get all files that were last backed up since a certain date. Or we can just use a range of dates for the find command to get the incremental file list using the last modified file time. This will all be looked at in more detail when I start working on incrementals, because for now I am just doing a full backup like I did before with rdiff-backup.
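For the find-based approach, the date range selection could look something like the following, where the dates are placeholders and -newermt is GNU find's "modified more recently than this date" test:

find /home/patrick -type f -newermt 2019-10-15 ! -newermt 2019-10-22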

Monday 21 October 2019

HOWTO: Set font size in virtual consoles

In Debian you can have virtual terminal consoles, which are different from a terminal emulator window. Before Buster came along, a lot of commands could be run in a terminal emulator, but these days, more often than not, commands have to be run in a virtual console instead. The difference is that a virtual console is fullscreen, not a window, and doesn't run under the GUI, which goes into the background at that point.

You can have six separate consoles that you switch between by pressing Ctrl-Alt-Fx where x is 1 to 6, i.e. the F1 to F6 keys. To switch back to the GUI you press Ctrl-Alt-F7.

The next issue is the font size which I tend to find rather too small. Fortunately this is easy to fix. Use nano to edit /etc/default/console-setup and change the following lines to read as follows:
CODESET="Lat15"
FONTFACE="TerminusBold"
FONTSIZE="16x32"

This gives us quite a large easy to read font. 

Run the command setupcon to apply it immediately; because the configuration file has been edited, this also becomes the permanent setting.

The fonts are all stored in the /usr/share/consolefonts directory so you can look up what is available if you want something different. What I haven't done yet is work out if I can change the colour from white.

Rsync Backup System [2]: Using ZFS For Compressed Backup Disks

So last time I talked briefly about ideas for using Rsync to do my backups. Over time this will gel into a whole lot of stuff like specific scripts and so on. Right now there will be a few different setup steps to go through. The first stage is to come up with a filesystem for backup disks, which are removable, and this time around I am having a look at ZFS.

Based on some stuff I found on the web, the first step is to install zfs on the computer that does the backups. Firstly the contrib repository needs to be enabled in Debian, and after that we need to install the following packages:

apt install dpkg-dev linux-headers-$(uname -r) linux-image-amd64
apt-get install zfs-dkms zfsutils-linux

After completing these steps I had to run modprobe zfs as well to get it running for systemd (this was flagged by an error message from systemd). This only applies to the currently running session. To ensure there are no "ZFS modules are not loaded" errors after rebooting, we need to edit /etc/modules and add zfs to the list of modules to be loaded at startup.
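In command form (run as root), that amounts to:

modprobe zfs
echo zfs >> /etc/modules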

Since Debian Buster, a lot of commands that previously could be run in a terminal window have to be run in a virtual console instead (reached by pressing Ctrl-Alt-F1). This is certainly the case for the zfsutils commands, so the following steps need to be run in such a console.

The next step after installation is to look at what is needed to create a filesystem on a disk and set up compression on it. In this case the existing disk is called spcbkup1; the ext4 partition currently set up on it appears as /dev/sdb1, which means the actual disk is /dev/sdb.

After deleting the existing filesystem from /dev/sdb the following command creates the storage pool:
zpool create -m /mnt/backup/spcbkup1 spcbkup1 /dev/sdb

whereby the new storage "pool" (just a single disk which is not really what ZFS is made for but it can be done) is set up to mount to /mnt/backup/spcbkup1, is called spcbkup1 and is using the physical disk /dev/sdb.

Issuing the blkid command at this point tells me I have two new partitions. Apart from /dev/sdb1 which is of type "zfs_member" there is also another partition called /dev/sdb9 for some other purpose known to zfs. /dev/sdb1 is mounted up at this point and can be used as a normal disk.

To turn on compression we then use this command:
zfs set compression=lz4 spcbkup1

There is one more question and that is automounting. When you put an entry into /etc/fstab then the default for mounting is automatic. For a removable disk we don't want this, so the entries in /etc/fstab end up looking like this:
UUID=....     /mnt/backup/spcbkup1 ext4 noauto 0 2
whereby noauto in the options means it will not be automounted. Instead you have to mount it manually after inserting the disk and before use. Because the caddies are being treated as non-hotswap, this requires me to hibernate or turn off the computer before inserting or removing the disk.

ZFS is a little different. We use zfs set mountpoint=none spcbkup1 to remove the mount point before removing the disk, and later on, when we want to put the disk back in, we use zfs set mountpoint=/mnt/backup/spcbkup1 spcbkup1 to restore it. As we can change mountpoints on the fly, without a config file or needing to know a long UUID, this obviously lends itself to mounting more than one disk at the same mountpoint. In other words, I can dispense with a different mountpoint for each of the disks and mount them all to the same path. So for this backup scheme I will have two mountpoints for spcbkup: one for the backup disk(s) for the full backup, and the other for the backup disk(s) for the incremental backup. These will look like:
/mnt/backup/spcbkup-full
/mnt/backup/spcbkup-incr

where the suffix is hopefully self explanatory, and as long as I swap the incremental disks in and out, the backups proceed automagically.

For this scheme we just have one backup disk for the full backup and we are sharing one backup disk for incrementals for all sources so the paths will look like
/mnt/backup/fullbackup
/mnt/backup/incrbackup

and the two incremental disk pools will be called incrbackup-odd and incrbackup-even, which refers to the week number. The backup script will work out the week number itself and act accordingly. The plan is that each incremental disk will contain two generations of backups, so with two disks there will be four incremental generations at any one time. In practice we take the week number and get the modulus after dividing by 4, which gives a number from 0 to 3, and the backup for that week goes into a folder on the disk called mainpc-x, serverpc-x or mediapc-x where x is the modulus. rsync will be set to ensure that any redundant files are deleted from the target at each sync, and the week number is also used to calculate the date range for the command so that it knows, from file modification times, which files have changed within that range. The script automatically runs the same day and same time each week for the incrementals. About every three months a full backup is done, on a different day from the scripted day so the script is not disturbed; with three computers this amounts to one full backup each month.
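A sketch of the week number arithmetic the script will need (bash; the 10# prefix stops week numbers like 08 being read as octal, and the paths are the placeholders used above):

week=$(date +%V)                # ISO week number, 01..53
pool=incrbackup-$([ $((10#$week % 2)) -eq 0 ] && echo even || echo odd)
gen=$((10#$week % 4))           # incremental generation 0..3
dest=/mnt/backup/incrbackup/serverpc-$gen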

It is important to unmount using the above commands before removing the disk otherwise the system will behave badly at next startup (although the Raidon disk caddies are theoretically hotswappable, in practice I am using them as non-hotswap for various reasons, so the system is shutdown or hibernated before changing disks).

So having worked out how to set up disks, the next step is to work out how to use rsync and I will be running a full backup of serverpc first of all.

Rsync Backup System [1]: Introduction

In February of last year I was looking for a new backup solution (I had used rsync to that point but found some issues with it) and tried a few different things. I have been using rdiff-backup since then (it comes as part of Debian) but this has its issues too.

At the moment I can't recall what the issues were with rsync and it's looking more attractive as a new option. I think the reason I couldn't make it work satisfactorily before was because I used it over a network (samba) share which it doesn't work well over. But Linux has SSH as a much better option for network syncing which rsync has built in support for and should work a lot better than over a Samba network share. 

The reason I want to ditch rdiff-backup is that I want a system that will do incremental backups by copying whole files, and onto a different disk volume. rdiff-backup works on the basis that you are backing up both full and incremental backups to the same volume, and an incremental will only contain the actual changed parts of files. Unfortunately this makes the incremental dependent on the full backup; if you lose the full version of a file then the incremental is of no use to you, since you only have the fragments that have changed, not the whole file.

The reason I chose rdiff-backup in the first place (over some options like borg, which I could not make work easily) was that it would copy a file structure (directories and files) instead of storing the files and folders inside an archive file. If for some reason the archive became corrupted it could be very difficult to recover the files inside it. There is an advantage that the archives are compressed but another option is to set up your backup disk to compress at disk level.  I tried doing this by formatting my backup disks as BTRFS but then the support for this changed in Buster and I am unsure if there are still issues with it. I will have another look at this especially if there is any option with compression support in ext4.

The idea at the moment is I would use rsync over SSH to back up each computer using a dedicated backup computer with a full backup about every three months and incremental backups about weekly in between. I have 2 TB disks for the full backups and 1 TB disks for the incrementals. Currently a WD Blue 2 TB disk suitable for a backup costs $105, while a 1 TB is $78. I'd need to get some more of the Raidon drive caddies for extra disks and the leather bags that they are stored in.

Rsync, or anything else that copies files and folders as-is, is a better and more transparent option than something that uses its own structure and archives, which leaves you dependent on that tool continuing to be supported and developed. Rsync is very well optimised for syncing and the idea is it can do incrementals on top of existing full backups. rdiff-backup does the same sort of thing except that, unlike rsync, it will leave the original file unchanged, whereas rsync will overwrite it (when resyncing to the same backup destination). There are a few possible schemes for backup. One possibility is to put the backup computer in a remote location (not really possible in a house) and have it sync unattended a month's worth of data at a time, then change the disk for another one and let it do another month, copying all three computers' data onto the same disk. Another option is two disks to do the incrementals, with the disks exchanged weekly. This is the more likely scenario. Probably each disk will store 6 incrementals of 2 weeks each (the full backups happen every 3 months).

Anyway there is a bit of detail to work out and probably some more disks and trays etc to purchase to get this scenario all set up and hopefully it will come together soon.

Tuesday 15 October 2019

How to create a desktop menu entry manually in Linux

This is kind of an odd thing to be posting about nowadays, with Linux desktops mostly including a menu editor, or installable third party packages like menulibre and alacarte being able to achieve the same outcome. However, LXQt does not include a menu editor comparable to the above options, and whilst both of those packages can be installed in Lubuntu or Debian/LXQt, the results of running them are confusing. It is possible with the PCManFM-Qt file manager to edit application shortcuts directly in a specific area of the standard user interface, but that is also confusing and difficult to understand. Fortunately the majority of desktop environments support a standard method of creating shortcuts as .desktop files stored in a standard location.

The .desktop files are created in either of the following locations: /usr/share/applications or ~/.local/share/applications. The difference between these is that the former location is system wide and the latter is per user. For this example we have downloaded Firefox Developer and extracted it from the download file. This does not give us a menu shortcut, so we have to create it. Typically we will copy the FFDE (Firefox Developer Edition) files to the ~/Applications folder and create a local shortcut in ~/.local/share/applications, which has the double advantage at reinstallation time that both the application and its shortcut are located in the user profile and therefore do not need to be reinstalled when the operating system is.

The typical format of the .desktop file looks like the following (in this case the actual parameters used to start Firefox Developer from where we installed it):

[Desktop Entry]
Categories=Internet
Comment=Firefox Aurora with Developer tools
Exec=/home/patrick/Applications/firefox-dev/firefox %u
Icon=/home/patrick/Applications/firefox-dev/browser/chrome/icons/default/default128.png
Name=Firefox Developer Edition
NoDisplay=false
StartupNotify=true
Terminal=0
TerminalOptions=
Type=Application
Version=1.0
 
Some of the above might not be needed.

The key issue, however, is that LXQt (at least on Lubuntu, but probably also on Debian) does not seem to categorise user specific shortcuts correctly in its menu. When one was created like the above, it ended up being put into the Other submenu rather than the Internet submenu. Likewise, one created with menulibre went into the same default location (~/.local/share/applications) but ran into the same issue of LXQt pushing it into the Other section of the menu. I am still experimenting to see how this can be overcome, but for now either method is possible, and since both produce the same outcome, using menulibre as a GUI is obviously preferred.

Friday 4 October 2019

Arlec Plug In Heater Controls [1]

Arlec is an old established Australian brand of electrical products (such as extension cords, plugboxes, timers etc) that has been available in NZ for many years, at least 40 to my recollection, and their innovative designs and features continue to the present day.

With plug in electric heaters, certain desirable features that would be easy to fit to a wired-in heater, such as a time delay switchoff or a thermostat, are not always easy to find. I can recall various brands having come and gone; for example, countdown timers from BenQ and HPM come to mind, and likewise various brands of plug in thermostat. Arlec is now making both types of product and selling them through Bunnings. (I also remember Kambrook plug in timers with some affection from my youth, but they have diversified more into household appliances nowadays.) I still have a Honeywell plug in timer fitted with a cord and interrupted phase tapon plug, which can be mounted on a wall some distance from the heater, and this is still going strong more than 30 years later.

The Arlec PC900 plug in countdown timer is a mechanical timer giving 2 hours delay and simply plugs into a 3 pin outlet and the heater or other device plugs into it. 

The timer appears to do what is expected of it. However due to its design, it will block an adjacent outlet in a plugbox or multi outlet wallplate. This appears to be a design factor with Arlec products in this range (several different types of timer including the PC697 digital 7 day timer).

Newly available from Bunnings is the THP401 which is clumsily described as a "temperature controlled programmable timer". 

So far as I can ascertain, this should really be described as a digital plug in thermostat. Because Arlec do not appear to have added this model to their website yet, and because there was not one on the shelf at the Bunnings store I visited today, I was unable to verify the description of the unit.




Wednesday 2 October 2019

Using a Windows 10 computer as a media player

UPDATE 19-10-7: Due to cost the Win10PC was put into an old obsolete chassis and parked in a corner of the room so it can be used when needed, but it will not sit on my desk and will hardly ever be used. This was accomplished using an identical spare GA-E350 WIN8 mainboard and the boot disk removed from the Mini-ITX chassis, and it works fine. The Mini-ITX chassis was then set up with Lubuntu using the 500 GB HDD that was in it as a data disk for Win10PC. x11vnc has been installed to enable remote control for maintenance purposes, and with Kodi it works a lot better than when it was running Windows 10.

UPDATE 19-10-6: Whilst this computer will do for now, I am looking at replacing it when I have the resources with a new Raspberry Pi (the new RPI 4B has a significant performance improvement over the 3B that I already have) or a NUC running Linux, just because it is getting a bit long in the tooth and has performance issues, although in reality most of these issues are related to Windows 10's excessive resource demands. The other option is to put the Windows 10 installation into another chassis, freeing this chassis for the spare board of the same type running Linux, so I will consider the options for that.

UPDATE 19-10-3: After the heap finally managed to update itself to Windows 10 release 1903 (a whole drama and story in itself: months of failing to install the update, me deciding I just could not be bothered troubleshooting, and then it finally managed to install itself successfully last night), Kodi started having playback problems with videos only, where every few seconds the audio would drop out momentarily. The solution I found that worked was to go into Settings, Player Settings, Videos, change the rendering method to DXVA and turn off DXVA hardware acceleration.

Back in December 2014 I got an Antec Mini-ITX chassis to use with a Gigabyte GA-E350 WIN8 board that I had purchased. I actually had two of these boards (the other one is stored as a spare). Given that it's almost five years, I'll elaborate: I originally got these boards very cheaply since they were designed for low power compact computers needing a power supply of less than 50 watts, and they have an integrated AMD E350 CPU. These photos were taken at the time I built the system; they are missing from the article that is on the blog and I need to put them back in sometime.

The bare chassis looking through the grille on one side which is for cooling, through the I/O plate on the back.
The partly assembled system. The board is installed and screwed in place and the internal cables (mainly front panel USB / switch / LED) are just loose waiting to be plugged into the board. You certainly need the low profile cooler on this board because a full size one would have trouble fitting inside.

The system assembled and running, with the display showing a setup screen. The extra cables that weren't in the previous picture are for the hard drives.
On the back of the chassis under a blank panel is the mounting plate for the 2.5" hard disks (there is room for two of them). You can see one is installed here with the power cable hanging loose in the 2nd mounting position. Behind the HDD is the bottom of the motherboard. The mounting plate is detachable because the disks sit on the inside of it and there are insulating pieces supplied to prevent the HDDs from shorting out on the metal mount plate.
Looking inside the complete system installing Windows. The cables have been tied back in place ready for the lid to be put on. The power input cable, which is for 19VDC from an external power supply that is supplied with the system, can be seen upper right. There is a small power supply board inside the chassis that generates the usual voltages (+/-12V +/-5V +3.3V etc) at up to 60 watts for the system with the usual ATX power connector.
Anyway, if you followed this blog you'd know that this system has ended up as my one and only Windows computer, now running 10 Home, and until recently it was hardly ever used. But now I want to get some use out of it, so I've been setting it up as a media player computer for bedside use, which means mostly playing music or playing videos with the screen turned off. The main issue with an older system like this one is that the CPU won't have the codecs built in for playing a lot of video, which means it can't handle higher resolution or high bit rate videos well, since video playback on most operating systems these days depends on the CPU's hardware decoding support; this is a key reason I upgraded a Sandy Bridge system last year after it had trouble playing back WEBM videos. For the use this system is being put to, that isn't a problem, and Youtube videos that stutter can be changed to a lower bit rate in the browser.

So what software do we want to use compared to what we can use in Linux? Kodi is available on Windows and it worked fine this time. I had trouble with it on a previous Windows installation where there was no sound. Windows Media Player has been out of Windows for so long that I have almost forgotten what it was like, and the alternative app built into 10 is nowhere near as good as Kodi. The other main application needed is a sync tool to sync the video and music libraries from another computer. Since I use NFS for my day to day networking stuff in Linux and MS doesn't make their NFS networking stuff available to Home Editions I had to set up a Samba share on the mediapc and network using the built in SMB/CIFS support for Windows. Karen's Replicator is the ideal program for syncing. I used to use this to back up my Windows computers back in the day and it does everything needed to get a full mirror that also replicates deletions on the source.

Friday 20 September 2019

How to PERMANENTLY disable an input device in Linux (Xorg Display Server)

About two months ago I posted on how to use the xinput command (on Ubuntu and derivatives) to disable an input device. That, however, only works until the next time the computer is restarted. In addition, not all distros provide the xinput command out of the box; my Debian install doesn't have it, and I haven't done the research to determine whether it just needs to be installed or whether there is a Debian specific alternative. The machine on which I need this to work is my Dell E6410 laptop, whose Alps pointing stick is drifting. By default the stick is enabled alongside the touchpad, so you can use either device, but there is no simple way to disable the touch stick, even when a USB mouse is plugged in, as I often do.

The answer when the Xorg display server is in use (Xorg has been the default X11-based display server for many years, and is now gradually being superseded by Wayland) is to change the configuration in a file stored in /usr/share/X11/xorg.conf.d/; in this case the file to edit is 40-libinput.conf. To effect the disablement, the xinput list command was first used to get the exact name of the touch stick device, and then this section using that name was added to the bottom of the file:
Section "InputClass"
        Identifier "disable touch stick"
        MatchProduct "AlpsPS/2 ALPS DualPoint Stick"
        Option "ignore" "on"
EndSection

So that section is pretty straightforward. The MatchProduct entry contains the exact device name string we got from xinput list, and the Option line, "ignore" "on", tells the Xorg server to ignore the device and not enable it.

After doing a reboot the touch stick was found to be out of operation and the xinput list command no longer lists this device.

As I noted in my previous post, the flaky touch stick on this laptop has been a significant issue: it causes the mouse pointer to drift across the screen quite randomly even when an external mouse is in use, and the options for disabling it at a hardware level (there is, for example, a BIOS setting that in theory disables it when an external mouse is connected) have not worked so far. I did not investigate physically disconnecting the device inside the laptop, but I expect this would be difficult, as the touch stick sits between the keyboard keys and is probably integrated into the keyboard assembly itself, so it could not simply be unplugged.

Being able to disable the touch stick is proving extremely useful, since the laptop is otherwise in very good condition for its age. It recently gained a 160 GB SSD in place of the original hard drive, and the original battery was replaced a few years ago; with the limited use since then the new battery should still have at least 90% of its life left, so if carefully looked after the laptop will probably keep going for another decade.

I am guessing this xorg.conf.d entry will work on most distros (including Debian) even where the xinput command isn't available.

Tuesday 27 August 2019

Setting up a second swap partition for Linux

In my last post I waxed lyrical about the merits of the Linux filesystem and how it is possible to have two swap partitions. Today I am setting up just that in my computer. It has an existing SSD with 100 GB available for swap, and due to an upgrade on the other computer, another 120 GB SSD has become available to install in it as a second swap partition, thereby more than doubling the swap space. This means the system should be able to edit very large Gimp files of up to 50 GB, whereas at the moment it can only handle ones up to about 20 GB because of everything else I usually have running on it.

The first step, having installed the disk and found it shows up in the system as /dev/sdc, is to make life easier by installing GParted, the graphical partition editor (preferable to the command line in this instance). GParted is then used to remove any existing partitions on the disk and allocate the entire space as a single swap partition.

The next step is to run the mkswap command on the new partition (although it looks like GParted already did this when it created the swap partition):
mkswap /dev/sdc1

And then we can use the swapon command to activate it:
swapon /dev/sdc1

after which running swapon -s lists the swap spaces currently in use, which now shows both of them. Conveniently, the KDE system load monitor widget also tells me the swap is now 198 GiB in size (it was 87 GiB and the new SSD added 111 GiB).

To permanently add the second swap partition it needs to be put into /etc/fstab:

UUID=83c0e622-bcdc-41fd-a1ce-ec3f6c88bfcc none            swap    sw              0       0

which is basically a copy of the existing swap entry, except for the UUID, which we get by running blkid.

Then I tested by loading two Gimp projects at the same time, totalling around 40 GB. Previously the first one would not even load, while the second would use the entire swap space on its own. Loading both together now uses less than half of the new swap space, making it possible to work on them without any concern about the system running out of swap and crashing Gimp.
 


Linux has the best filesystem ever :)

I've been in the IT industry for something like 30 years (longer if I were to include my secondary education) and in that time I have used a fair few operating systems, starting with Apple DOS 3.3, UCSD P-System, CP/M, Acorn MOS, Macintosh System 6/7, NetWare, MS-DOS, Windows (since 3.1), macOS and Linux. Obviously operating systems have evolved along with computer hardware, so we can't really compare anything current with the old Apple ][ or BBC Micro Model B. However, Windows and Linux can be compared as they are both current, and to me Linux stands out as having the superior architecture, reflecting its well designed structure and its Unix heritage in high end multi-user computing, compared with Windows' architecture, which grew out of the single-user PC world.

So here is what stands out about the Linux filesystem structure:
  • Linux is engineered from the ground up to allow different parts of the core filesystem to be mapped to different drives or partitions. On Windows the core of the filesystem has to live on a single partition, the C drive. Hacks to move, for example, the home drive to another partition, while provided for via a registry setting, will break the service pack or version upgrade process, and can therefore only be used by technically savvy people who manually manage their own updates, backing up the home partition and migrating their data to the updated version. Since Windows 10 has frequent version upgrades, it is pretty well impossible for an end user to relocate the home drive this way.

    Linux, on the other hand, makes it very easy to map /home to a different partition. This is a real advantage when it comes to reinstalling, because you can keep the home partition intact and just reinstall the operating system and re-use your profile. That's another thing too - Windows will not allow you to re-use your existing profile when you reinstall. It will spew if you try to do that.

    I give my systems an SSD for the OS install and put /home onto a RAID-1 hard disk array.
  • In Windows your paging (virtual memory) goes into a file. In Linux, swap goes into a dedicated partition by default. I suppose you can tell Windows to put its paging file on a different partition, but the default is to keep it on the C drive. The issues with using a file are disk fragmentation and potentially running out of disk space to store it, exacerbated by being unable to relocate your home folder (see above).
  • In Linux, you can spread your swap across more than one partition. This means I can expand my swap simply by adding another SSD to the system; that's about to happen, in fact, so I can have more virtual memory available for editing large Gimp projects. Windows does let you configure a paging file on more than one volume, but each one is still a file sitting inside a filesystem rather than a dedicated swap partition.
And there's a whole lot more beyond that, including support for a whole pile of different disk formats. NTFS is pretty good, but Microsoft keeping it proprietary makes it harder to exchange data between different types of computers.

Tuesday 20 August 2019

Tamron AF 70-300 Di LD telephoto lens for Canon EF-M?

UPDATE: Since first writing this I discovered I can get the EF-EOS M adapter from Canon for $199. Although adding this to the camera won't do much to make EOS M lenses affordable overall, it would give me the option of using the telephoto lens from my existing EOS 600D on the M100. Assuming it really is available at that price, it has fallen sharply from the original pricing of around $500-600 for the adapter.

One of the obvious points about owning an interchangeable lens camera is that we expect to be able to change lenses when we need something like a telephoto or wide angle instead of the standard lens. Since I own a Canon EOS M100, I'd hope to fit it with a tele zoom one day and take photos at considerably greater distances than the kit lens allows (a Canon 15-45 mm collapsible zoom). The Canon zoom lenses for the EF-M mount, however, are very expensive, at around $600, which is almost as much as I paid for the camera itself, so it was inevitable I'd want to look at third party lenses.

Industry commentators have long remarked on Canon's extreme reluctance to threaten its profitable DSLR market segment with mirrorless cameras, pointing to the less than inspiring design and high pricing of the early EOS M models. Canon has insisted this is not the case and has brought out a much wider range of EOS M models at lower prices in recent years. However, only a very limited range of expensive Canon lenses and accessories is produced for the EOS M, compared to what is available for the EOS DSLRs in the standard EF mount; the cheapest Canon EF-M zoom lens costs around $550 retail. It looks very much like Canon is only prepared to position the EOS M series as high-end compacts, with a deliberate strategy of discouraging end users from treating them as a cheaper interchangeable lens alternative to the DSLR range.

Current pricing on the Tamron AF 70-300 Di LD lens is around $287, which is a very low price for a telephoto zoom (the effective focal length range will differ depending on the crop factor of the body it is fitted to). At that price it is made of plastic, like the Canon kit lenses, and it has the other limitations you would expect of a cheap lens: a front element that rotates (tricky with angle-dependent filters), slow autofocus, no image stabiliser, sticky zooming, blurry corners, some distortion, and fringing / softness at the telephoto end. The results also depend on the camera it is attached to; a smaller APS-C sensor avoids the edges of the image circle, which improves quality in many respects but does not overcome the mechanical and design limitations.

The main issue is that I am finding it hard to establish whether this lens is actually available in the EF-M mount. So far it looks like Tamron's only EF-M offering is the much more expensive 18-200mm F/3.5-6.3 Di III VC which, while a better quality lens, ends up costing practically the same as Canon's own. If Canon has placed restrictive licensing on third party lenses for the M mount, that would fit its apparent strategy of keeping the market for affordable EOS M lenses deliberately small.

Wednesday 7 August 2019

Python Scripting [6D]: Layer Sidecar Duplication & Renaming for NZ Rail Maps 4

A couple of months have gone by since we last looked at this topic, and the duplicate script has been working as expected, both for duplicating sidecar files for layers that are the same size as the original and for layers where the destination has been increased in size. The former is the most common usage, although the latter was the original reason the script was devised. Simple duplication for same-size layers has proved extremely useful, as it was previously a manual task of copying and renaming the files, which was tedious even with Thunar's built in bulk renaming.

One refinement has now been added to the original script: an additional command line parameter, -o or --overwrite, which specifies that existing files may be overwritten. If the parameter is included in either form (no value is required) then overwriting is allowed; otherwise it is not.
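For example, a run that allows existing sidecar files to be overwritten might be invoked like this (the script filename and layer names here are made up purely for illustration; the point is the -o flag on the end):

python duplicate.py -s layer_original.jpg -d layer_copy.jpg -o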

To implement this in Python we have to add an extra line to the argument parser initialisation code block:

parser = argparse.ArgumentParser(prog='duplicate')
parser.add_argument('-s', '--source', required=True)
parser.add_argument('-d', '--dest', required=True)
parser.add_argument('-o', '--overwrite', action='store_true')
parser.add_argument('-m', '--multisuffix', type=str, default="")
parser.add_argument('-p', '--pixelsize', type=float, default=None)

(the new line being the -o / --overwrite one; the rest are existing)

The difference is that because no value is passed with this parameter, an action must be specified, in this case 'store_true'. The way this works is that if the flag is present on the command line its value is set to True; if it is absent, argparse falls back to the default, which for 'store_true' is False.
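A standalone sketch showing this behaviour (not part of the script itself):

import argparse

parser = argparse.ArgumentParser(prog='duplicate')
parser.add_argument('-o', '--overwrite', action='store_true')

print parser.parse_args([]).overwrite      # prints False - flag absent
print parser.parse_args(['-o']).overwrite  # prints True - flag present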

Then we collect the parameter value with all of the others:
overWrite = args.overwrite

Since the block of code that writes the world file would otherwise have to be duplicated, we put it into a function and call it by name wherever it is needed. So at the top of the script there is this function definition:

 # function to write new world file
def writeWorldFile(wFPName,wFLines,pSize):
    # write new world file
    wFile = open(wFPName, "w+")
    if pSize is None:
        wFile.write(wFLines[0] + "\n")
    else:
        wFile.write(str(pSize) + "\n")
    wFile.write(wFLines[1] + "\n")
    wFile.write(wFLines[2] + "\n")
    if pSize is None:
        wFile.write(wFLines[3] + "\n")
    else:
        wFile.write("-" + str(pSize) + "\n")
    wFile.write(wFLines[4] + "\n")
    wFile.write(wFLines[5] + "\n")
    wFile.close()

Then when we are going to write the world file it looks like this:
            # copy world file
            # first read existing world file
            worldFileName = fileNameBase + ".jgw"
            worldFilePath = os.path.normpath(rootPath + "/" + worldFileName)
            worldFile = open(worldFilePath, "r")
            worldFileLines = worldFile.readlines()
            worldFile.close()
            WFNNew = fileNameNew + ".jgw"
            WFPNew = os.path.normpath(rootPath + "/" + WFNNew)
            if overWrite:
                print worldFileName + " -> " + WFNNew
                writeWorldFile(WFPNew,worldFileLines,pixelSize)
            elif not os.path.exists(WFPNew):
                print worldFileName + " -> " + WFNNew
                writeWorldFile(WFPNew,worldFileLines,pixelSize)
            else:
                print WFNNew + " exists --------------------"

That could be simplified and made more efficient by putting the read operation into another function and only calling it when actually needed. At the moment the read is performed even if the write is not performed. 
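A minimal sketch of that refactor might look like this. The readWorldFile name is made up; the helper would sit at the top of the script alongside writeWorldFile:

# hypothetical helper: read an existing world file into a list of lines
def readWorldFile(wFPName):
    wFile = open(wFPName, "r")
    wFLines = wFile.readlines()
    wFile.close()
    return wFLines

and the block that copies the world file would then only read the source when a write is actually going to happen (it uses the same variables as the existing block):

            # copy world file, reading the source only when a write will happen
            worldFileName = fileNameBase + ".jgw"
            worldFilePath = os.path.normpath(rootPath + "/" + worldFileName)
            WFNNew = fileNameNew + ".jgw"
            WFPNew = os.path.normpath(rootPath + "/" + WFNNew)
            if overWrite or not os.path.exists(WFPNew):
                print worldFileName + " -> " + WFNNew
                writeWorldFile(WFPNew, readWorldFile(worldFilePath), pixelSize)
            else:
                print WFNNew + " exists --------------------"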

The code for duplicating the two xml files is basically the same for both and the code block looks like this:

            # copy xml files
            auxFileName = fileNameBase + ".jpg.aux.xml"
            auxFilePath = os.path.normpath(rootPath + "/" + auxFileName)
            AFNNew = fileNameNew + ".jpg.aux.xml"
            AFPNew = os.path.normpath(rootPath + "/" + AFNNew)
            if overWrite:
                print auxFileName + " -> " + AFNNew
                shutil.copyfile(auxFilePath,AFPNew)
            elif not os.path.exists(AFPNew):
                print auxFileName + " -> " + AFNNew
                shutil.copyfile(auxFilePath,AFPNew)
            else:
                print AFNNew + " exists --------------------"
 
The same block is duplicated in the script with the .xml extension instead of .jpg.aux.xml. This could be made more efficient by putting the block into a function and passing the two different file extensions to it in two separate calls, as sketched below.
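A sketch of how that might look, using a made-up function name and relying on the os and shutil imports the script already has:

# hypothetical helper: copy one sidecar file, honouring the overwrite flag
def copySidecarFile(rootPath, fileNameBase, fileNameNew, extension, overWrite):
    srcName = fileNameBase + extension
    srcPath = os.path.normpath(rootPath + "/" + srcName)
    dstName = fileNameNew + extension
    dstPath = os.path.normpath(rootPath + "/" + dstName)
    if overWrite or not os.path.exists(dstPath):
        print srcName + " -> " + dstName
        shutil.copyfile(srcPath, dstPath)
    else:
        print dstName + " exists --------------------"

# one call per sidecar type would replace the two duplicated blocks
copySidecarFile(rootPath, fileNameBase, fileNameNew, ".jpg.aux.xml", overWrite)
copySidecarFile(rootPath, fileNameBase, fileNameNew, ".xml", overWrite)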

Probably after doing more testing I will tidy the code up further along the suggested lines.

There is also an issue where some of the aux files are possibly not being copied, because every so often Qgis shows a "CRS was undefined" message when I load a duplicated layer. I have not yet worked out what is happening; the CRS information is stored in one of the xml files, so there must be a problem with the copy operation or something else going on that I haven't identified yet.

Thursday 1 August 2019

Python Scripting [4D]: Exif Based Image Renaming 4

After testing the Exif renaming script for the past couple of weeks, I have added code to check for collisions and rename any file whose new name would collide with an existing one. With Exif data, the most likely cause of a collision is a camera set to shoot continuously, taking several pictures a second; if the camera doesn't set the subsecond time field, the Exif timestamp will be identical for several photos, and filenames based on it will therefore also be identical. Another possibility is owning two cameras of the same type and letting someone else use the second one at the same time as you use the first (for example, two of you covering a big event), which can also produce duplicates.

When I used IrfanView it had a collision handling function built in, similar to the one Windows Explorer uses when copying files into a destination that already contains the same name, where an extra number in brackets is added to the end of the filename. Apart from the fact that on Linux we obviously don't have Windows Explorer, that handling couldn't be applied to Exif-based renaming, so I couldn't use it and had to rename colliding filenames manually. There are a few older directories in my photo archive that do contain duplicates; these came about when files were copied elsewhere and then copied back into the original folder, and the duplicates have an extra string of digits on the end of the filename because, after a collision occurred on the first copy, a second copy operation was run using IrfanView's ability to append a sequence number.

So for this script I planned to add filename collision handling to deal with these possibilities. This consisted of writing a function and calling it from the main script, because it is needed in two places. Functions are easy to write in Python and, as in any other language, they are useful for more than just reusing a block of code; there is a strong case for writing all of your code in functions so that the main execution block is just a neat series of function calls, as sketched below. I haven't done this with any of my scripts yet but I will start with the next project.
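As a rough illustration of that style (all the function names here are made up, not taken from the actual script):

def parseArguments():
    # build the argparse parser and return the parsed arguments
    pass

def renamePhotos(args):
    # walk the source folder and rename each file from its Exif data
    pass

def main():
    args = parseArguments()
    renamePhotos(args)

main()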

The collision handling function itself is fairly straightforward and looks like this:

def fixCollision(fn): # handle filename collisions. fn is full filename with path
    p = os.path.splitext(fn)
    f = p[0]
    e = p[1]
    x = 1
    ff = fn
    while os.path.exists(ff):
        ff = f + "-" + str(x) + e
        x += 1
    return ff  

The first line is the function definition with an explanatory comment. The next lines split the filename into its base and extension and set the initial value of the collision counter x. We then set the initial value of ff, the complete filename string tested in the while loop, to the filename passed into the function, and enter the loop, where the while clause checks whether that filename exists. If it does, the loop makes a new filename by inserting the value of x, with a dash separator, between the base and extension, increments x for the next iteration, and goes back to the top. As soon as a filename is found that doesn't collide (which may be the filename originally passed in), the loop exits and that filename is returned to the calling code.
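As a quick illustration with a made-up filename:

print fixCollision("/photos/20190801-123456.jpg")
# returns /photos/20190801-123456.jpg unchanged if that name is free,
# /photos/20190801-123456-1.jpg if the original name is already taken,
# /photos/20190801-123456-2.jpg if the -1 name is also taken, and so on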

This is called within the script in two places (the code blocks that handle images and non images) as follows:

                    destFile = os.path.normpath(destPath + "/" + destName)
                    if not os.path.exists(destPath):
                        print "Create destination path " + destPath
                        os.mkdir(destPath)
                    destFile = fixCollision(destFile)

This code block sits just before the move operation that moves the source file to its destination and renames it at the same time; in other words, once we have a destination, the file is moved to the new location and given its new name in one step. The lines in the block build the destination file path from the new directory and the new Exif-based filename string (a different process is used for non-Exif files), check for and if necessary create the destination directory, and finally call the collision resolution function. Essentially this ensures the destination filename is unique within the destination folder.

The only other issue that has come up with the script concerns the file permissions of the originals off the camera. As far as the computer and I are concerned, I have only read permission on the photos, and the script leaves that unchanged. I am currently considering whether the script should change the permissions on each file as it is renamed. However, there is an advantage in keeping the files in the Photos folder read-only, since they should be copied before any alterations are made, so I may just leave things as they are.
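If I do decide to have the script change the permissions, a minimal sketch (assuming it would go just after the move, with destFile holding the file's new path, and using the standard os.chmod call) would be something like:

import stat

# hypothetical: make the renamed file readable by everyone and writable by its owner
os.chmod(destFile, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)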