Saturday 22 August 2020

Bye Bye Blogger

This blog is now closing and moving to Wordpress. This is due to Google's arrogance in forcing everyone on Blogger onto a bug-ridden new user interface that they should have spent more time testing and fixing before making it mandatory. Google has also been dictatorial in abruptly removing email notifications from Youtube recently. Hence we invite our readers to continue reading our new blog at http://nztechonverse.wordpress.com

This blog and all the existing articles will remain up for as long as Google decides not to remove them. We aren't migrating the existing content to Wordpress, but all new posts will be made there.


Thursday 20 August 2020

General computing update - week 34, 2020.

Since writing in early July about the need to replace the disks in mainpc, I have finally made progress by purchasing the first of two disks. As noted in the earlier post, this was prompted by an I/O failure on one of the disks that caused it to drop out of the RAID array. I was eventually able to add it back into the array and it has worked properly ever since. The new disk has been added to the existing array, which is currently a three-disk mirror. As soon as mdadm has fully synced it, I can remove one of the existing disks from the array and take it out of the computer, as I need backup disks right now. As noted in the previous article, the array will stay the same size as it is now. I expect to buy the second disk in about another month and install it the same way, so that once it is in, the old disks can be removed completely, all without actually copying any files. Last time I did this I copied the files manually, but that was mainly because I was doubling the disks from 1 TB to 2 TB; this time I am staying at 2 TB.
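
For reference, this is roughly the mdadm sequence involved. A sketch only: the array and partition names here (/dev/md0, /dev/sdd1, /dev/sdb1) are hypothetical, so check cat /proc/mdstat and lsblk for the real ones first.

# add the new disk to the existing mirror
mdadm --manage /dev/md0 --add /dev/sdd1
# if all mirror slots are already active the new disk arrives as a spare;
# growing the device count makes it an active mirror and starts the resync
mdadm --grow /dev/md0 --raid-devices=4
# watch the resync progress until it completes
cat /proc/mdstat
# once synced, fail and remove one of the old disks, then shrink the count again
mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm --grow /dev/md0 --raid-devices=3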

Another technical task I have been working on around the house is erecting a TV aerial on the outside of the house. This meant installing a cable duct and two faceplates between the inside wall and the underside of the soffit to take the aerial cable. That was the hard part; running the cable through the duct was very simple, as was putting the aerial itself up. The TV is wall mounted and the cable comes through directly behind it. As I am in a prime reception area, the aerial does not need actual line of sight to the transmitter: there is another house in the direct line, but the signal is so strong that there is enough to get a good picture on every channel.

A few weeks ago I had problems with a new package-based version of Qgis that would crash every time I tried to run it, on more than one computer. I had to install a flatpak-based version of Qgis instead, although I could also have run it in a virtual machine. The project developers built a new release after several weeks and it started working again, as did the release after that. Because of the disruption, I have added an extra precaution of testing every new release in a VM before installing it onto a working computer.

Google has again heaped glory all over themselves by implementing an arbitrary and dictatorial change to the Youtube platform. As of last week, they stopped Youtube sending out notification emails whenever a new video is uploaded or streamed to a channel. This has left a lot of people, including me, very annoyed. It is similar to the nonsense of the new Blogger user interface, which is badly broken and has forced me to compose all my posts in another editor like LibreOffice and paste them in, as I am doing right now with this post. Contrast that with the approach taken by Wordpress.com with the new editor system introduced a few months back; it was pushed through, but they did a lot of consultation with their users. Google has just announced the changes, and if there are any issues, it's a one way conversation filing a problem report, just like the way you cannot actually have a conversation with Facebook about bugs or problems on their platform most of the time.




Thursday 6 August 2020

PDF Editors For Linux

This post is brought to you by the new Blogger interface Google has forced on everyone! Which is dreadful! The only way I keep my sanity editing posts on Blogger is to create the post in a separate editor (LibreOffice Writer in this case) and then paste the completed article into Blogger. There is much angst in the Blogger users' community over the broken functionality in the new interface and the fact it has been forced into use with what they believe to be insufficient testing. My own experience bears that out, but this is the only one of my regular blogs still hosted on Blogger, and the workarounds are sufficient for me until such time as they fix all the bugs.


Today's post is about editing a PDF using free/open source software. The PDF format has historically been an Adobe thing, and so has the main editing software. Everyone knows and uses Adobe Reader on various platforms, but relatively few people use Acrobat, the expensive commercial package that can edit PDF documents. Hence, a few alternative solutions have been developed, and their abilities are improving all the time. Here are my takes on a few of them.


My particular requirement here is filling out a PDF form and inserting my signature. It's fine to fill out the form in Okular (KDE's in-house PDF viewer), but inserting a graphic there is impossible. So I looked at some of these alternatives:


LibreOffice Draw is part of the LibreOffice suite and can read and edit PDFs as files made up of individual elements. In my brief examination of Draw, my main concern was whether it could output the document looking like the original after editing; it seemed to have difficulty converting all of the text to typefaces that would fit cleanly into the original layout. Because of this, I have not explored Draw further for my particular requirement at this stage.


Inkscape is a well known graphics editor with a lot of features and is one of the few favourite graphical editors I have installed on my computer. I haven't looked very deeply into its capabilities because the major limitation I have observed so far is that it can only handle a single-page PDF; there is no obvious way of working with multi-page documents.


Most of the full editors available are paid only. PDFSam and MS Word 2019 are examples that are Windows only. I have no desire at all to spend money on any type of Windows computer, or even a virtual machine, just to run these solutions. PDFstudio is an alternative that is available on Linux. The Pro edition, which is capable of PDF editing, costs $129 and is licensed for 2 computers. It would be interesting to evaluate this product at some stage to see if it is worth purchasing in future. Master PDF Editor is another product I might evaluate; it just puts a watermark on each page, but it might be possible to remove that with one of the free editors.


Scribus is a FOSS desktop publishing package that can also open PDF files. Version 1.5 is currently a development edition and on most distros is only available as an AppImage. I found, however, that it has the same issue as some other packages of being unable to render the fonts in the previously filled out PDF form.


Ultimately for this particular situation, needing a quick and easy solution to get my PDF filled out and signed, I have used Gimp, which will import each page as either a layer or a separate image according to a choice made when opening the document. It imports the pages as graphics, so the workflow is: fill in the form in something like Okular and save it to a new document, open that in Gimp and insert the signature as a graphic, then export each page as an image and combine the images back into a new PDF. A complex process for just one form, but it lets me send my document completely filled out, signature included, because Okular cannot insert a graphic into the appropriate place on a PDF. I think this Gimp solution will be the best short term option, but I will still be interested in evaluating other possibilities in future.
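
As a minimal sketch of the final reassembly step, assuming the pages were exported from Gimp as numbered PNG files (the filenames here are hypothetical), ImageMagick can combine them back into a single PDF:

# combine the exported page images back into one PDF
convert page-1.png page-2.png signed-form.pdf

The output is a purely graphical PDF, so the text is no longer selectable, but for a signed form that is usually acceptable.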

Tuesday 21 July 2020

General update week 30, 2020

I wrote recently about the fun I had getting a system to work with static IP addressing (it is running Debian 10 with LXQt). Since it could be statically configured with no issues, I have given the other Debian 10 LXQt system a static configuration as well. The second computer only has one network adapter, but the benefit is that since the network router and wireless broadband router are only switched on when needed, I no longer have to wait for the network router to come online and be ready to supply DHCP addresses to the client computers when they start up. Everything is working well now.

At the present time the backups are all being done locally on each computer, as previously described, using rsync, since I have yet to do the further work needed to make an rsync daemon function as planned. Another option I could look at for backups is the BackInTime application; however, I am generally wary of anything that does not do straight file-by-file backups to the destination but instead creates its own archive format. This is less of an issue with commercially supported proprietary software, but I am very insistent that I do not get locked into archive formats with open source software that may not be supported forever. This is probably the reason why lots of people prefer just to use rsync.

mainpc had an issue a couple of weeks ago when some sort of timeout on one of the RAID array disks caused mdadm to push it out of the array. It then came on and off line seemingly at random for a time, but eventually stabilised and stayed online. Smartctl testing showed there were no errors on it. After a week it was re-added to the array, which is now functioning with 2 disks again, seemingly with no problems. However, as these disks are 4 years old (last replaced in 2016), I am currently planning to buy 2 new disks for the array as soon as I can, at the rate of 1 disk per month, with the old ones becoming available for backups, although I would still prefer to buy completely new backup disks.
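
For anyone wanting to run the same sort of checks, this is the smartctl sequence I mean; the device name is hypothetical and the commands need root:

# start a long self-test on the suspect disk
smartctl -t long /dev/sdb
# after it finishes, review the self-test log and the error counters
smartctl -l selftest /dev/sdb
smartctl -a /dev/sdb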

The disks currently in the array are WD Black and that is the proposed replacement at $250 each. SSD is considerably more expensive, but by the time I next have to replace any of these disks the cost of SSD could well have come down enough to be affordable. WD Black SSDs in the size needed are very expensive at around $1000 regular pricing each (four times the HDD price) and are only available in M.2 format, which is problematic since it is rare for a board to have more than one M.2 slot, at least in the price range I buy in. I have checked, and the H-97 and B250M-D3H boards do have one M.2 slot each, but not two. There are adapters available, but another option is simply to switch to another SSD brand such as Intel, which is considerably cheaper although with lower performance; if the performance at least equals a standard HDD, it would be a big reduction in cost to around $400 each. As I see it, in a few more years a pair of SSDs for the array will be within the same price range as regular HDDs. I am still keeping a lid on the amount of storage in this computer, resisting the temptation to buy bigger disks; in spite of growing photo and general data storage, so far it has proved possible to manage the existing storage well enough to avoid running out of space.

Monday 13 July 2020

Building a Linux PC for under $1000 (TechRepublic)

TechRepublic have an interesting article on building a great Linux PC for under $1000. Although the article clearly refers to US dollars, which usually means lower prices, the prices quoted are mostly comparable with what we pay in NZ.

My last couple of build experiences were different. I built a low cost system using a Gigabyte H110-S2H board, my usual choice of a Pentium G CPU, and 8 GB of RAM for around $300 about 3 1/2 years ago. I think it was initially installed with Windows 10. It was cheap because I recycled a case, power supply and hard disk, so I only had to pay for the board, CPU and RAM, which cost about $100 each at the time. The board has since been taken out, put into a different chassis and repurposed for video playback, because it only has 2 DIMM slots. I must have been really strapped for cash at the time, as all the other systems I have built used slightly more expensive boards with 4 DIMM slots.

The last builds I did involved buying a pair of Gigabyte B250M-D3H boards and Pentium G CPUs for a total of about $500, about 18 months ago. I had planned on getting newer B360 boards and Pentium Gold CPUs, but there was a shortage of those CPUs at the time, and I discovered Dove Electronics, a wholesaler we used to deal with when I worked in high schools, had only two of the B250 boards left, probably the last stock of this model in the country. I managed to get these shipped to me along with the CPUs and 32 GB of RAM for one of the boards (another $500), then rearranged a whole lot of things: the aforementioned H110 board was taken out of its chassis and replaced by one of the B250s with all the new RAM, and the other B250 went into a new system with some RAM from the H110 board (which had previously had 16 GB, now split between it and one of the B250s).

The oldest thing I have here is a Gigabyte Z97, which is the computer I am writing this on, with a heap of RAM. I can't quite put a date on when I got it, but it is still going strong and probably has years of life left in it unless the board blows up or something, because it still has plenty of performance for everything I can throw at it. The hard disks are the only parts I have had to replace, and as one has been a bit finicky lately I might have to scrape together some cash to replace both of them soon, which is a bit stiff with my current resources. This system is also the only one of my computers (apart from the Mini-ITX system) for which I have actually purchased a chassis, all the others being recycled from schools. The B250 that had only 8 GB recently got a boost to 16 GB, which cost around $70.

Anyway, going from the article, pricing is pretty similar, and if you had to buy everything new it soon adds up. The only really different cost I can see in there is for RAM; 32 GB at current pricing in NZ is about double what he quotes ($275 at PBTech, which is itself a fair drop from what I paid 18 months ago).

Building your own PC is actually pretty straightforward and well worthwhile. The most complicated task you would typically expect to encounter is installing the CPU and fan, and I have never had any issues there despite the potential for damaging the chip or board. I usually build my systems on a wooden base on a table (nude) and then install them in the chassis once I have verified everything is working properly.

Considering that in 1997 my first serious PC cost nearly $4000 from a shop (with a cheap inkjet printer), it is amazing how much the cost of parts has come down (making no allowance for inflation). Even having a shop build you a PC today wouldn't cost anything like that much, maybe $1500 for a system built in NZ, and some companies have systems built to order overseas and shipped directly to the customer. If you use Windows you can get it for next to nothing with a shop or OEM built PC, although the retail edition you can buy for adding to an existing computer is a bit pricey (I have one here for the one and only Windows computer, the Mini-ITX system, that I still use occasionally for compatibility reasons). Running Linux on all my home built systems takes the OEM licensing cost advantage out of the picture completely, besides all the other benefits that Linux brings.

Tuesday 7 July 2020

Rsync Backup System [6]: Latest Progress / Using The Rsync Daemon

Because of issues with Rsync over ssh I am looking into using the Rsync daemon instead; the commentary on that is below. Until I can get it working properly I am backing up each computer individually using its own removable drive bay. This is achieved by changing the ownership of the backup folder on the removable backup drive to backupuser, then logging into a virtual terminal as backupuser and running the backup under that account. I use that account, rather than root or my regular user account, to ensure the account has only read and execute permissions on the computer being backed up, so that it cannot in any way risk deleting files from it. This is very important because I use the --delete flag on rsync to ensure files removed from the source are also removed from the backup. When doing an incremental backup that flag is important to stop the disk filling up with removed files and running out of space. But it is also a useful safeguard against accidentally copying files from the backup disk over the source files, especially with incrementals, where your backup disk is already full of files.
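
As a minimal sketch of what that looks like, with hypothetical mount points and folder names:

# as root, once: give backupuser ownership of the destination folder on the removable drive
chown -R backupuser:backupuser /mnt/backupdisk/mainpc
# then, logged into a virtual terminal as backupuser:
rsync -av --delete /home/patrick/ /mnt/backupdisk/mainpc/

Because backupuser has no write permission under /home/patrick, accidentally transposing the source and destination would fail rather than overwrite anything.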

The simple means of incremental backup that rsync supports is to reuse an existing backup disk that has previously had a full backup written to it. This lets rsync compare the source and the backup directly, without needing any additional system for recording file details to work out whether they have changed. In other words, it compares each source file with the existing backup and only transfers the changes. This is quite different from other backup solutions I have used, which keep a separate incremental backup disk.

To implement this system I will need at least 3 disks for each computer if I want to keep one full backup and two incrementals. In other words each disk will start off as a full backup, then two of them will be alternating incrementals, and a full backup from scratch will be done every few months and kept separately from the incrementals.

I would still have preferred separate incrementals, and in fact that is possible with two removable drive bays: one holds a full backup disk and one an incremental disk, and the incremental is written to the incremental disk instead of the full backup. The problem with this solution is that rsync cannot produce progressive incrementals, since it is always comparing against the last full backup; the incrementals will not be snapshots of changes since the last incremental, but will always be increments from the date of the last full backup. It is also messy working with the extra paths to the different disks in the backup command, and therefore more likely to risk mistakes like backing up to the wrong disk.

So in summary, the best solution is a separate disk for each backup: an incremented full backup in the case of two of the disks, and a separate full-only backup in the case of the third.

At the moment I am down to 1 backup disk for each computer, so I need to buy some more disks along with removable bay caddies and storage cases. I am also working on secure storage for the disks at a location away from the house. Until now I have only been doing a backup about every 3 months, because the RAID-1 array in each computer is so reliable day to day that I don't need backups very often, but I will attempt to schedule a monthly backup cycle in future.

Since I have been using Rsync as a backup solution it has been very reliable, but there are occasional difficulties, and the one I am having at the moment is the backup terminating partway through with rsync error 12 (an error in the rsync protocol data stream). I have done some fault finding but haven't been able to narrow down what causes this termination; it is probably something to do with the ssh server on the source computer, which rsync connects through. This means I either have to find out what is happening with ssh on the source, or try the alternative, which is to run the rsync daemon (service) on the source and connect rsync to that from the backup server.


Firstly, as I am on Debian, rsyncd is already installed with rsync. The next step is to configure the /etc/rsyncd.conf file, which I will do from this sample provided in that article:


uid = 1001
gid = 1001
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log

[backupuser]
path = /home/patrick/
comment = backup patrick
read only = true
timeout = 300
     
 
I am making a few changes to their sample, the most notable being the uid and gid, which are set to the values for backupuser, the user that has permission to do the backup on the source computer. Assuming rsyncd runs as root, this ensures the files are read with the correct permissions. I am also naming the module backupuser, and it points directly at the backup path for my home folder.

Start the daemon with systemctl start rsync and set it to run at startup with systemctl enable rsync.
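
Once the daemon is running on the source, the backup server connects to the module using rsync's daemon syntax instead of ssh. A sketch with a hypothetical hostname and destination path:

# pull the module named [backupuser] from the source machine
rsync -av --delete rsync://sourcepc/backupuser/ /mnt/backupdisk/patrick/

The double-colon form sourcepc::backupuser/ is equivalent; either way no ssh session is involved, which is the point of the exercise.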

Next time I will report the results of testing with the daemon and whether it worked successfully.


Reinstall KDE system with LXQt [3]

This is the third article in a series about problems with KDE leading to a reinstallation with LXQt, both running on top of Debian 10.

The computer concerned runs two NICs because it needs to access both of the networks I have here. The main one is based on a corporate wireless link from next door, which is filtered and throttled, and the second is based on a Skinny cellular broadband modem, which is not filtered or throttled, although speed is moot given the limitations inherent in 4G cellular data. The Skinny modem is connected to the local devices through a Ubiquiti AirRouter, which provides a DHCP server and network switch, as well as the wireless network for phones and tablets. The capabilities of the AirRouter are much better than those built into the Huawei Skinny modem for internal networks, which is why I am using it.

There are some issues with LXQt, mainly in the panels, but apart from that the reinstallation of software has gone well. One issue with this computer in general is that it needs more RAM, and I am planning to address that in the next week or two, since I need it to run a virtual machine at the same time as the range of stuff I normally use it for. Another issue was reinstalling Gimp: I had difficulty working out where Gimp keeps its config data, as I wanted to copy the config over from my existing computer. It turns out that Gimp uses ~/.config/Gimp as a path, but apparently it also used a Flatpak-specific path under ~/.var/ to store some data, and there is considerable confusion over why it maintains these two paths, especially as the Preferences dialog doesn't actually point to the ~/.config path when it appears it should.

One of the interesting experiments this week has been to work with the mounting arms that I use to hold some displays. I have used Brateck wall mount arms with VESA adapters on them for this for some years. The key has been to modify these to allow fine control of the height so that when displays are stacked vertically, it is easy to put them close together. 

This photo shows how the attachment of the arm to the "wall" (in this case a vertical post attached to the desk) has been modified to make its height adjustable. This is done with two right angle brackets bolted together using 80 mm M6 gutter bolts, which theoretically gives an adjustment range of 80 mm.

I am having to work around issues with the backup system on one of my computers at the moment: rsync has not been able to connect over the network between the computers, so running the backup locally on each computer seems to be the way to go for now.

Wednesday 1 July 2020

Setting up sudo on Debian

Debian is a bit different from Ubuntu, and that is the reason I like it. Debian is much more focused on technically advanced people; for example, it has hibernation enabled by default, in contrast with Ubuntu, which makes hibernation difficult to use, and there are many other differences in Debian that I prefer.

One thing with Debian is that sudo is not set up by default for the first user account, and it's necessary to take a few steps to set it up. sudo is installed, so all you have to do is add your user account to the configuration. A related quirk is that under Debian 10, /sbin is excluded from a standard user's path in a terminal window, and many commands will not work unless they are prefixed with the full path, something I have not yet fully understood the rationale for. In a virtual terminal that limitation is removed, and it's hard to know why the restriction applies in a terminal window running under the desktop environment when Debian 9 did not have this issue.
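
If the missing /sbin path is a nuisance, one common workaround is to extend PATH for your own account; this is a sketch only, and whether you also want /usr/sbin depends on your setup:

# append the sbin directories to the user's PATH in ~/.bashrc
echo 'export PATH="$PATH:/sbin:/usr/sbin"' >> ~/.bashrc

This only affects new shells, so open a fresh terminal window afterwards.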

Anyway the steps to set up sudo are pretty straightforward:
  • Use the su command to enter superuser mode in a terminal shell
  • Install sudo if not already installed: apt install sudo
  • Run the adduser command to add your user account to the sudo group: /sbin/adduser <username> sudo
  • Log out and then log back in again
And that's it.
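
A quick sanity check after logging back in:

# should prompt for your own password and then print root
sudo whoami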

Reinstall KDE system with LXQt [2]; KDE network and display configuration extremely difficult with numerous bugs

As related yesterday, I found various issues with KDE on a computer and decided to reinstall Debian 10 with LXQt as the user interface. After completing the installation I attempted to use ConnMan, the network configuration manager that comes with LXQt on Debian, to manually configure the 1.x network adapter with a static IP address, without using DHCP. However, my experience repeated what I had found on another computer where I had used ConnMan in the past: it continued to send DHCP requests for this adapter despite the manual setting, and overrode the blank "gateway" field specified in the manual settings with the DHCP-supplied values. In this case the setting that was overridden was the DNS server: the system ended up using the 1.x network's DNS server address obtained via DHCP instead of the 4.x network's DNS server obtained via DHCP on the 4.x network adapter.

As I have noted, the Lubuntu network configuration manager does not have this problem and is definitely the best of the three. However, as I am not sure at this stage whether I can install it on this computer, my next step was to uninstall ConnMan and set the network parameters in the /etc/network/interfaces file. After uninstalling ConnMan, dhclient could not complete any DHCP requests for any interface; it seems dhclient had been configured to use settings provided by ConnMan and would have to be reconfigured to work without it.

The interfaces file entries ended up looking similar to this:

auto enp0s31f6
allow-hotplug enp0s31f6
iface enp0s31f6 inet static
           address 192.168.4.173/24
           gateway 192.168.4.11
auto enx00500b60c1eb7
allow-hotplug enx00500b60c1eb7
iface enx00500b60c1eb7 inet static
           address 192.168.1.173/24

In addition, /etc/resolv.conf contains the entry
nameserver 192.168.4.11

After completing these entries the system was rebooted, and further checking confirmed the settings had been applied. It was also confirmed there had been no further lease requests to any DHCP servers; the lease for the 1.x network adapter (the problematic one that was pushing through unwanted default gateway and DNS server settings) had expired and had not been renewed.

It therefore appears I have resolved the display issues and the network issues so far. The only remaining issue of the three mentioned in the original article is to get x11vnc running. Then there is the separate matter of reinstalling the software that was on this computer before I reinstalled it.
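
When I get to x11vnc, the invocation I have in mind is along these lines; a sketch only, and the display number is an assumption:

# serve the current X session, keep listening after a client disconnects,
# and require the password previously stored with x11vnc -storepasswd
x11vnc -display :0 -forever -usepw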

LXQt has some obvious differences from KDE, one of which is the panel configuration. At present it will not support more than one panel on the desktop, and will not properly support a panel in the top position on the lower display, which in my display configuration is the ideal place for a single panel shared between both screens. A panel can be placed there and used normally, but it cannot be right clicked and configured using the usual menu options, because the menu is truncated. The only fix is to move the panel to the bottom using the panel.conf file, stop and restart the panel, do the configuration at the bottom, then move it back to the top when finished. I will have to check with the LXQt project to see if they have any fixes for this problem.

The other key issue, of course, is that fewer widgets are available for LXQt as yet. The main one I am missing is disk free space. I can simulate the CPU monitor bars I was using on KDE (they show CPU usage, memory usage and swap usage) with some of the built-in widgets. So apart from some niggling issues related to the differences between the desktop environments, I expect to have this system up and running again fairly soon, and to end up with a more reliable and easier to use system than I was experiencing with KDE.

Tuesday 30 June 2020

Reinstall KDE system with LXQt [1]; KDE network and display configuration extremely difficult with numerous bugs

KDE has a great reputation as a desktop environment on Linux and has won considerable plaudits. However, it has numerous unresolved bugs, which I have observed in the display and network configuration, and these are leading me to ditch KDE on one of my desktop computers in favour of LXQt. I have had three computers running KDE and one running LXQt, all on top of Debian, but I will be moving a second computer onto LXQt and therefore will have only two running KDE.

The issues I am observing with KDE mainly concern non-standard configurations of both display and network, plus the inability to use x11vnc, which KDE somehow blocks from running. The problem can be summed up as KDE not being versatile enough to allow different configurations where the standard ones are insufficient. For example, the VNC server designed for KDE, Krfb, will only buffer one display on a two display computer, making it inferior to x11vnc.

The summary of the issues experienced to date is:
  • Networking. Unable to configure a static IP address and route for a network adapter and stop the automatic configuration of the adapter via DHCP. Unable to disable a network adapter and stop it from connecting, as KDE ignores the disable setting and creates another new adapter with the same settings as the disabled one (the only option then is to disable or remove the adapter at the hardware level).
  • Display. Unable to save the configuration of two displays on my computer. The displays are stacked vertically. KDE is unable to remember these settings and after each reboot, defaults to side by side displays.
  • Remote Frame Buffer. As mentioned above, KDE stops x11vnc from working. Krfb will only buffer one display in a dual display system.
My networking requirement is unusual in that the computer has two network adapters because it is required to connect to two different networks. On LXQt this is relatively easy to set up. On KDE I have experienced the problems mentioned: it keeps ignoring the manual settings. Only one of the adapters can have a default gateway, because there can only be one default route to the internet. I have made it clear that the manually configured adapter is not to have a default gateway, but KDE ignores the instruction to use the manual settings and keeps sending DHCP requests on that network. The net result is that it has become impossible to enforce the required network configuration on the computer.

To illustrate that this is not difficult in other desktop environments, I have set up a virtual machine on the same computer, configured in VirtualBox with the two network adapters, with the appropriate configuration done in LXQt (the VM is running Lubuntu). This has worked flawlessly, without any of the dramas that KDE creates. The network configuration on the virtual machine uses the option "Shared to other computers", which allows an IP address and netmask to be specified without gateway settings. In other words, that adapter has no gateway specified and does not send DHCP requests on the network.

Another difference between LXQt and KDE with regard to networking is the list of configuration methods and their implementation. In the connection manager provided with Lubuntu, which is admittedly different from LXQt on Debian, "Shared to other computers" lets a manual IP address and netmask be specified. But this information can't be entered when selecting the same configuration method in KDE; the KDE network manager in that case automatically creates its own IP address on the 10.x.x.x network, and you obviously have to configure the rest of your network to match.

The key problem with KDE is that it takes over too many things in a computer and makes it difficult to work around its default settings. In LXQt it is extremely straightforward to configure a non-standard dual display and save the settings in a config file, extremely straightforward to install and manage x11vnc, and generally more straightforward to configure a network adapter. However, ConnMan is a bit more tricky to configure than the Lubuntu network manager; the Lubuntu one is one of the best I have used on Linux overall.

I have started a complete reinstall of Debian 10 on the computer, despite my misgivings over having to reinstall all the software (including the tricky iscan software for the Epson scanner), because KDE has such an impact on the system that it is difficult to just put LXQt over the top of it.

Wednesday 24 June 2020

Free Linux video editors [4]: Kdenlive

So...I have been creating and editing home video for quite a number of years now. It wasn't much of a thing when I was at school; schools didn't have much in the way of video equipment, and editing software at a reasonable price for the low spec PCs we had back then was almost non existent. My editing experience goes back about 20 years to a multimedia course I did at CPIT as part of my DipBC qualification. There, we got to use Adobe Premiere Pro on PCs. From that, I went on to work in a school environment, and Windows Movie Maker was available around that time on Windows XP. I also purchased a license for Adobe Premiere Elements and used it for a time, as well as a license for Vegas Movie Studio; both of these products are targeted at the lower end of the video editing market and are lite versions of Premiere Pro and Vegas Pro respectively. So I have a bit of experience with various low end movie editing programs. Premiere Elements was, I remember, very difficult to use and tended to crash a lot. VMS was a lot more stable once various patches had been applied, and had the capability to directly author DVDs. I also purchased a software package to write Lightscribe labels onto DVDs, and used these packages together to produce official video DVDs for the school for a while.

When it came to Linux I naturally expected it would be possible to find some reasonably good video editing software, but it has taken a fair while to achieve this, which is understandable considering the resources available to the open source community. In some respects this is a failing of the open source model itself, in that many projects are started and then abandoned or forked. This has happened numerous times in video editing and was one of my early frustrations in attempting to locate good software; ShotCut and Pitivi, mentioned in earlier articles in this series, are examples. Thankfully, as time has gone on and with further investigation, more high quality packages have become known to me. For the moment, Kdenlive has turned out to be the best for the type of stuff I do, and has quite a reasonable range of capabilities, few of which I have actually needed to use in my sample project.

Kdenlive, like some of the other editors such as Avidemux, is basically a GUI front end to the well known FFmpeg libraries, via an intermediary called MLT (whose rendering tool is melt). It makes a whole lot of sense to hand the hard work to a well known and well resourced existing software library rather than invent a new one inside the application. Kdenlive is thus essentially an application that makes it possible to visually assemble and preview the various source elements that will be combined into a video production. Once this stage has been completed, clicking the Render button calls up MLT/FFmpeg to do the hard work of producing the final video. Of course, this can be quite a slow process, but it can run away in the background; unlike Gimp it doesn't demand so much of the system's resources that everything else grinds to a halt.

I found the GUI very easy to work with and only needed to make a few references to the online documentation. The timeline worked more or less the same as in other editors I have used, so this part was very straightforward; Kdenlive also proved very stable, with no crashes experienced. My very basic project consisted of editing together two large clips, each made up of a number of smaller ones joined together in Avidemux; as mentioned previously, the second one was rescaled from 1920x1080 to 640x480 to match the resolution of the first. There were also some smaller clips to add at the end, which were in 1920x1080 and which I didn't bother to resize before bringing them into Kdenlive. Since the output resolution of the project was already set to 640x480, Kdenlive did the letterboxing and downscaling of these clips itself, which is a useful capability I wasn't aware of. The rendering was also stable, and the fact it took 5 hours to render the 3 hour clip at 640x480 has not been a significant concern; what matters much more is that the software has proved stable and reliable. So I expect this package can do everything I need, and I will be interested to see where I can go with it in future. I didn't use any transitions in this project, and it will be interesting to explore them in whatever video I work with next.

Monday 22 June 2020

Free Linux video editors [3]

Well, I have written about free video editors for Linux a number of times; this is the third article in the series. It's taken me a few years to get to this point of discovering good quality software for Linux, because there are so many open source projects that people have started and then abandoned, and finding the really good ones, which are mostly produced by professional companies, has taken a while.

Here is an article from Linux Magazine that lists a few options.

For my setup, AppPC, which has potentially sufficient storage and RAM, will be the system set up long term for tasks like this. Obviously it doubles at the moment as the machine for editing mosaic projects in Gimp for NZ Rail Maps, but this won't be happening forever, so it is natural to look at using its ample resources for video production, and possibly in the future audio editing as well.

There are some really good high quality video editors out there. Last time I wrote, I was taking a look at Blender and Handbrake among others. I am currently working on a tricky project that involves bringing together a number of clips shot in two different formats: about half of it was shot at 640x480 in MOV format and the other half at 1920x1080 in MP4. The problem is melding together these two resources, which differ greatly in size as well as aspect ratio. After a lot of mucking around I finally used Avidemux with two video filters, first adding black borders to the original clip (changing its resolution to 1920x1440, which gives it a 4:3 aspect ratio) and then scaling this down to 640x480. A little trick to remember is that video filters in Avidemux will not work unless you choose a video mode other than "Copy"; in other words, you tell it to re-render the video output. This also gives you a chance to downsample the video at the same time. The combination of these settings brought the final clip down from 14 GB to 0.5 GB. Handbrake was useful for converting the MOV clips into M4V, but I could not work out how to re-aspect the HD clips in it, so that ended up being what I used Avidemux for, as well as joining a number of separate clips into one.
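
For comparison, the same pad-then-scale operation can be expressed as a single ffmpeg command; this is a sketch with hypothetical filenames, not what Avidemux runs internally:

# letterbox 1920x1080 to 1920x1440 (4:3), then scale down to 640x480
ffmpeg -i input.mp4 -vf "pad=1920:1440:0:180,scale=640:480" -c:a copy output.mp4

The y offset of 180 centres the original 1080 pixel high frame within the 1440 pixel high padded frame.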

So I now have two separate clips, each about 80 minutes in length and in the same format, to put together in a video editor, along with a few still shots. I am trying out several different software applications to see what works best. Here is what I have attempted so far:
  • Blender is FOSS and has a lot of support for it, but not as a video editor. This function is not well supported in the wider community. When I tried using it, I found it was clunky and not very intuitive. This led me to look into other alternatives.
  • Da Vinci Resolve is a free version of a commercial product. It is complex to install, and wouldn’t work out of the box for me. Because of this I haven’t looked further into it.
  • Lightworks is also a free version of a commercial product. It looks promising, but so far I haven’t tried installing it.
  • Cinelerra is another of the few that are FOSS. There are two versions: HV, which is the full deal, and GG. If you want to use HV, you have to compile it yourself from the source code. GG is a whole lot easier, with native packages in a repository that install directly onto Debian (and a few other distros). So I have installed it for testing and will see how it goes. The software looks reasonably good, but the interface so far is confusing, and it has crashed once.
  • Kdenlive is another fully FOSS package. It is officially part of KDE, and is produced as an AppImage. This makes installation very simple and straightforward as it involves just a download. So I have this installed as well, for testing alongside Cinelerra-GG. At the moment Kdenlive is the one I am trying the most.
So we will see how things go with the last two packages on this project, and which option is the easiest to use and gives the best results. I hope to have the project finished by the end of this week.

Wednesday 10 June 2020

Raspberry Pi or HTPC For Livestream Player [5]

Last time I wrote on this subject I was looking at whether to continue using a Raspberry Pi to play livestreams and video clips, or to move to a more conventional HTPC setup with a normal computer. My original attempts to use a normal computer were with the Mini-ITX board and chassis I have, which had similar challenges to the Pi in being underpowered and failing to support the latest codecs, resulting in playback performance problems.

I then looked at building an HTPC using a Silverstone ML03 chassis. For a variety of reasons this has not progressed, one of them being that the chassis no longer seems to be available. I then took a look at using the computer attached to my desk, which is built on a Gigabyte H110M-D3H board. This has already been used for these same purposes from my desk, and I felt it could be dual purposed, as it is doing the same things I require from a bedside-accessible PC. This meant shifting the main playback screen from above the desk to a location where it is readily visible from the bedside, with the computer controlled using a Logitech multimedia wireless keyboard with a trackpad.

Initial trials used only this one screen, but I eventually realised it was desirable to keep a second screen so the computer could double up for other work. So it now has the same screen configuration it always had: a 1440x900 Dell screen above the desk, and the 1360x768 Sony 32" monitor for playback, which is visible from both desk and bed. The speakers for now have been left in their location above the desk.

Meanwhile the front room is now fitted with an old Bravia LCD TV that I was fortunate to pick up on Trademe for $80. Since it does not have Freeview, it is set up to play from a NUC, satellite or Chromecast.

Another option previously considered for the Pi is to fit it into a chassis designed with much better heat dissipation characteristics. As discussed previously, PB Tech have these for about $20. I plan on getting one for the Pi as soon as possible, but even if it makes the Pi work properly, I won't be changing the bedside setup, as it is a good use of existing resources to have the computer on the desk set up for both roles. It would, however, open up the possibility of finding another use for the Pi so that it isn't sitting around here gathering dust.

Tuesday 2 June 2020

NZ Rail Maps: Using Gimp To Georeference Retrolens Aerial Photos [7]: Extracting Mosaic Tiles

So I have been trialling the alternative overlay method with more files and have decided to redo all the mosaics for Auckland so far. This is a few days' work, but as there are already some accuracy issues in some of the hillier areas or where the embankment is raised, it is going to bring improvements. However, file size savings are harder to nail down, and in general I am not seeing consistent, meaningful reductions across a range of files. What I have arrived at to maximise the saving is to do a bulk crop to 50% of the original size, and then custom crop each layer to just the amount needed, which varies from layer to layer depending on what is required to align with the surrounding terrain (roads etc). Currently this technique is being used on a big mosaic project, Auckland-Westfield NIMT, which covers 14 km of corridor and has around 180 layers. There I broke my rule: by cropping the historical layers so much we save a lot on disk usage and can have more of them, but I am still testing the water with so many layers in one project, using just a small part of a very large canvas of 35 gigapixels, probably the biggest I have ever worked with, and that aggressive cropping is necessary to ensure the file size doesn't balloon unmanageably. Here's a screenshot of the canvas, with all the base tiles in place. The Port of Auckland can be seen upper left. This corridor was opened in 1930, the original route of the NIMT being the western line, dating from 1873, that became the North Auckland Line and Newmarket Branch.



This post outlines the last stage in this georeferencing process, which is to extract tiles from the mosaic project that can be imported into the GIS and used to produce historical maps whose features are accurately positioned against present day maps; this being, of course, the major point of the whole exercise. There are some important considerations, the first being whether to use the same tile size as the base imagery (4800x7200 in this case) or to combine a number of tile spaces into a larger tile. I tend to go for the latter option since it means less extraction work. The important thing is to use the grid to make the top left corner coincide with an existing background tile, so that I can copy the coordinates from that tile when I make the sidecar files for the mosaic tiles. Sometimes there isn't one available, and the coordinates have to be calculated from the nearest available tile. Here, using EPSG:3857 with its coordinates expressed in metres gives a distinct advantage over a CRS with coordinates in degrees, because all you have to do, knowing the pixel resolution in metres, is a simple calculation. In this case, at the 0.1 metre pixel resolution of the base tiles, each tile covers an area 480 metres wide and 720 metres high. It is then straightforward to alter the top/left coordinates in the world file using these numbers.
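
To make that concrete, this is what a world file (.jgw) might contain; the coordinate values here are made up for illustration:

0.1
0.0
0.0
-0.1
1757000.05
5920000.05

The six lines are the pixel width in metres, two rotation terms (zero), the negative pixel height, and the x and y coordinates of the centre of the top left pixel. For a 4800x7200 tile at 0.1 metres per pixel, the tile immediately to the east would add 4800 x 0.1 = 480 to the fifth line, and the tile immediately below would subtract 7200 x 0.1 = 720 from the sixth.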

The second consideration is how to name the new tiles. I always base my names on the original base tiles with a prefix. If the base tile has been scaled up or down, I also put a suffix on the original name to show that it has been resized: say I have a base tile 93Y35-92S5Z that has been scaled to double in each direction, I will have renamed it 93Y35x2-92S5Zx2. The prefix is a single letter representing the station, followed by four digits for the year. For a tile area that covers multiple original tiles, I use a filename that shows the first and last tiles in the range covered. So here is what my large mosaic tile's filename might look like:

X1984-93Y35x2-92S5Zx2+93Y3Bx2-92S63x2.jpg

So in there we have station X in 1984, and then the range, using layers that have been scaled up by 2 on each side. Note the use of the + character, which may not be permitted in Windows; this file name is legal in Linux, which I use and which has far fewer filename character restrictions than Windows. This is just an example: I don't know if I can cover all of that large area in one file, as Gimp does have an export size limit, it may not be an efficient area to cover in one single file, and in this example we are not actually scaling 0.1 metre tiles up by two times anyway.

My sidecar files are .jgw, .xml and .jpg.aux.xml, and they all have to be given the same base name with the correct extension; we just copy the ones from the original base tile for the top left corner (93Y35-92S5Z in this example). The only change needed is in the jgw file: if we have scaled, then the pixel resolution measurements need to be changed, e.g. from 0.3 to 0.15 m for a 2x upscale. This is well described in previous posts. I described earlier in this post how I might change the top-left coordinates in the same file if I didn't have a set of sidecar files already downloaded to match the top left base tile in the area I am covering. Other posts on this blog describe the world file format in general and what those six numbers mean.

So hopefully in this series of posts I have adequately covered how to georeference the historical aerial layers in Gimp and how to get them into the GIS. Along the way I have learned some stuff as well, which is why I like to write these articles.

Thursday 28 May 2020

HOWTO: Set laptop screen brightness in Lubuntu

Some laptops have function keys that can be used to adjust screen brightness, but these are not supported on all models. In that case, there may be controls in the monitor settings or power management to adjust the brightness.

In my case I wasn't able to find anything, so I worked out how to adjust it from the command line. Install the xbacklight package and then invoke it with a command like this:

xbacklight

By itself, this will give you the current reading.

xbacklight -set <x>

Pass a number after the -set parameter and it will set the brightness to that percentage. I found 50 to be a good value after discovering the default was just 12.5.
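
xbacklight can also adjust relative to the current level, which is handy for binding to spare keys:

# raise or lower the brightness by 10 percentage points
xbacklight -inc 10
xbacklight -dec 10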

Monday 25 May 2020

NZ Rail Maps: Using Gimp To Georeference Retrolens Aerial Photos [6]: Alternate Overlaying Technique

Today I am going to look at an alternative way of stitching together the source layers. I have to do a set of mosaics for the station of Kamo, which is at the northernmost end of Whangarei. From a rail perspective there were two stations there. Just a little south of Kamo there was a coal mine which appears to have had a siding going off on the west side of the line, and although I don't know any more than that, I will include the NZR aerial images from Kamo right down to this area, since they cover continuous corridor. The idea I am working on today is to reduce the amount of overlap between aerial layers. In every other project to date I skipped every second layer in the sequence, but still found there was a lot of overlap between layers: enough to create the various overlapping challenges I have mentioned, but not enough to allow skipping to every third layer. I want to look at reducing those overlaps, the potential benefits being much faster overlaying in sequence, smaller file sizes, and a more accurate representation of the ground with reduced perspective distortion. The way I did it before, with every second layer, still meant quite a large percentage of each layer overlapped, and the overlap fell on one side, which means one side of the image is more distorted.

In the layer sequence from Kamo down to the colliery siding the images are Survey 2788, Run A 1 to 3 and Run B 1 to 8: a total of 11 source layers. But I am going to use only the middle of each one. In addition, because there is overlap between the A run and the B run, as is fairly common when the same rail corridor is covered by more than one run (because it goes around a curve and out of the side of the straight line of the existing run), I might be able to drop one or two layers. So the first step is to run Gimp with no projects open and open the source layers using the Open as Layers command. Gimp creates a new canvas the size of the first layer and loads all the layers into it, centering them within that canvas. As all the source images are more or less the same size, they line up with each other more or less exactly, or at least well enough for this exercise.

The obvious approach is to choose a percentage and apply it to all of the source layers. Using the centremost part of each layer is a good rule of thumb, but you will have to check that this works with the source images you are using; because you are using all of them, there is so much overlap everywhere that it is highly likely it will. The way to do this in Gimp is to use the Guides, which are set up from the Image menu. First of all, go to the View menu and select Show Guides. Then go to the Image menu, open the Guides submenu, and if there are any existing guides visible that you don't know the purpose of, Remove All Guides. Then on the same submenu, choose New Guide (by Percent). In the dialog, select vertical or horizontal (you need guides at right angles to the rail corridor, but I set up both orientations to cater for both possibilities). Start with two guides at 33.33% and 66.66%. The next step is to go through the layers one at a time and look at where the guides fall on them, for example from A 1 to A 2, and see if there is enough overlap. In my case I decided to go for the middle 40%, so I went back, removed the guides and set them at 30% and 70%, and then there was enough overlap.

The next stage is to crop the layers and save them to be imported into the mosaic project. The first thing to do is create a selection covering that middle 40%, which I do by going to the View menu and turning on Snap to Guides, then creating a rectangle selection and using the guides to get the long edges into the right place. The short edges go right up to the border, which needs to be trimmed off in any case; this is why I didn't mention cropping the frame off each source layer, as I did when importing the full size layers directly into Gimp in a previous post. The selection is drawn as a rectangle whose short edges parallel the rail corridor and whose long edges are at right angles to it. This is how I get only the middle 40% of the rail corridor on each layer.

There are a couple of techniques for cropping in Gimp, just as a by-and-by, with different results in terms of their impact on the source layers. The obvious one to use is the Crop to Selection option on the Image menu. This takes the canvas down to the size of the selection and at the same time crops every layer to fit that canvas, which happens to be the right thing to do for this project. The alternative, useful to know if you want to keep the source layers at their full size, is the Image menu option Fit Canvas to Selection. That has no application to this particular project, but there are times when it is useful to have the full size layer available even if part of it is outside the visible canvas area; I mention it in passing in case one day that is the situation you have, and there have been times I have used it, for example when selecting a few layers off a bigger canvas and saving them in a smaller one. Gimp will save layers that extend outside the visible canvas, obviously using more disk space than if all the layers are cropped within it. Keeping the canvas to just the size you need is important because I discovered a bug in Gimp where a large canvas with only one small corner occupied will crash when saving. So my rule of thumb is to create only the canvas size you will use and expand it as needed.

So use the Image menu's Crop to Selection option to crop down all the layers at once, then save to a new Gimp project file. I have 11 layers (A 1 to 3 and B 1 to 8), and the new project file holding the middle 40% of all 11 layers comes to less than 1 GB in total, which looks really good so far. Now I am going to import additional base layers and my 11 segments of overlays into my mosaic project. In this case my base layers go from 93Y35-92S5Z to 93Y3B-92S63, which is a total of six columns (Y35-36-37-38-39-3B) and five rows (S5Z-60-61-62-63), so the first task is to increase the canvas size by adding 36000 pixels (five rows of 7200) to the top, plus a further 7200 for an extra blank row to ensure a one-row gap between this geographical area and the one appearing below it on the canvas, as I have multiple areas (different railway stations) on the same canvas. The canvas is already wide enough for the six columns, so my new canvas will be 52800 x 151200 pixels. One thing I learned when I started doing large projects is that it doesn't matter how much of the canvas is actually used. What uses the disk space is each layer, regardless of where it sits in the canvas. I have had some very large canvases with only an L shaped area along two sides occupied, maybe only 15-20% of the entire canvas, and there is no wasted disk space, because Gimp only saves the actual layers, not the canvas itself. It just saves the dimensions of the canvas and the position of each layer, not every single pixel of the canvas.
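
The canvas arithmetic above, as a Python-Fu sketch (the numbers are the ones from this project):

    # Add six rows of 7200 px (five tile rows plus one blank spacer row)
    # to the top of the existing 52800 x 108000 canvas.
    image = gimp.image_list()[0]
    added = 6 * 7200                                     # 43200 px
    pdb.gimp_image_resize(image, 52800, 108000 + added,  # new canvas 52800 x 151200
                          0, added)                      # shift existing layers down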

After resizing the canvas, use Open as Layers to bring in the base layers, being sure to first select, in the layer list, the layer that you want the new layers to be inserted above; if you want the new layers to appear in the right order, select a sort order in the file dialog that is the reverse of the one you want. Once the base layers have been imported and positioned in the right places on the canvas, bring in the overlays. You do this with the Open as Layers command again, but instead of selecting image files to bring in as one layer each, you select the project file that you created when cropping the overlays. Gimp brings all the layers from that project file into your current project individually, inserting them above the current position in the layer list in the same order as they were in their own project file.
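
If you're scripting, the insertion point can be controlled explicitly. A sketch with a hypothetical file path; gimp_image_get_item_position gives the stacking index of the currently selected layer:

    # Insert an imported layer directly above the currently active layer,
    # which is where Open as Layers puts new layers.
    image = gimp.image_list()[0]
    anchor = pdb.gimp_image_get_active_layer(image)
    pos = pdb.gimp_image_get_item_position(image, anchor)
    layer = pdb.gimp_file_load_layer(image, "/path/to/overlay-segment.png")
    pdb.gimp_image_insert_layer(image, layer, None, pos)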

Then you can overlay them the same as usual. Working with smaller layers turned out to be a lot easier for me, and the result was that they came together in the mosaic project quite rapidly. I did change the layer order in one place, putting layer A 3 above A 2, and was able to dispense with layer B 1 because B 2 had enough overlap onto A 3. I also found I didn't need layer B 8, so that got me down to 9 layers out of the 11 I originally thought I would need. After putting in the first layer, I can speed up the subsequent layers by dragging each one to the same size as the previous one, doing the underlap straight away, and then lining up the bottom edge. Even with the potential challenges of surrounding hilly terrain not lining up and significant changes in the general area (there is now a lot of residential development on the east side of the railway at the first Kamo station), it was very straightforward. (Kamo had two stations, both named Kamo, the earlier one further south than the later one; as both were open at the same time, the more northern one was originally named Ruatangata or Ruatangata Springs.) The amount of overlap (and therefore underlap) is much less, and the narrower layer segments are simply a lot easier to work with, because they are a closer representation of the actual terrain. I believe the result has fewer overlapping errors and is a more accurate representation of what is on the ground, but this will have to be tested in practice, and that is much more likely to happen in mosaics covering large rail yards than at a station with a few tracks or a single track main. It's hard to work out any possible improvement in file size, but as an exercise I went back to my original project, removed every second layer and the redundant layers, and saved it; the result was a file 50% larger than the original. When I saved my mosaic project in Gimp, it already had the full size versions of all the layers in Run A and Run B imported, so it's hard to pick the actual saving in disk space. I've taken out 17 full size layers and replaced them with 9 that are less than half the original size, but in their place I imported another 16 base layers. In any case, the new version of the mosaic project file is about half a gigabyte smaller than the previous version, so on the whole I think there are going to be some real benefits in resource usage as well.

The net result of this exercise is that this is how I will be doing my mosaics in future: taking the middle 40% slice out of consecutive layers, because it works out better all round. The overlaying process at this point can be described as:
  1. Align and size an overlay along the long edges of the previous overlay (does not apply to the first overlay).
  2. Find the underlap point with the previous overlay (overlap of background for the first overlay) and roughly align to it.
  3. Line up the bottom edge in roughly the right place.
  4. Do Step 2 again, more precisely.
  5. Do Step 3 again, more precisely.
  6. Transform.
There is one more part to write and that is how to extract the mosaic tiles for the GIS. That will be Part 7 next time.

Sunday 24 May 2020

NZ Rail Maps: Using Gimp To Georeference Retrolens Aerial Photos [5]: Overlaying Multiple Layers

So in my last post I showed how to get started in georeferencing with the first overlay. In that post I put the second layer in to show how it would join onto the first layer. The aim of this part is to show how to add additional overlays and how to deal with the overlap between one layer and the next. 



Right now, from the last post, this is what I can see on my computer screen of the Whangarei rail yards and sidings. Whangarei has a main station yard and a separate port area with its own sidings. The port used to handle container traffic until it was all moved to Marsden Point (which is near the heads of the Whangarei Harbour, whereas Whangarei is at the top) around 20 years ago. There has been huge debate ever since over the lack of a rail link to that port, and the rail-friendly Clark Government started the process to build the spur line that would come off the North Auckland Line at Oakleigh, only 14 km south of Whangarei. But then there was an election, the anti-rail Key administration came into power, and all the rail work stopped. It is now being resumed by the Ardern government, and construction will probably be approved to start in the next 12 months. Anyway, those port sidings are still in place to some extent and some ships still come into the port, so the maps of Whangarei will document that; we need seven overlays in the D series and two in the F series, a total of nine. I am also mapping Kamo, to the north of Whangarei, as the same series of aerial photos (Survey 2788) has two runs covering that part of the Whangarei urban area, and the Northland 0.1 metre urban aerial imagery is also available there.

But for now let's have a look at overlaying D 5 onto D 3. As you can see, I am not covering much distance with each overlay. I can do a rough measurement from the grid because I know each base tile is 4800 pixels wide and 7200 pixels high and each pixel measures 0.1 metres on a side, so with the two overlays so far covering roughly 2 tiles wide and 2 tiles high, we have covered an area of about 960 by 1440 metres, or 0.96 km by 1.44 km. It's easy from that to understand how the disk space gets gobbled up, because I have to work at this high resolution to get the detail on the ground that I need, and that means a lot of layers. This particular project file covers Oakleigh, Portland, Whangarei, Kamo and possibly Hikurangi / Waro in one canvas, which is not continuous but broken into separate areas for each station; it's currently a 52800x108000 canvas with 69 layers, taking up 10 GB on disk. As I mentioned previously, my rules are not to exceed 100 layers or 20 GB of disk size; I do have a small number of projects that are around 30 GB, but Gimp doesn't seem to be able to handle much more than this without crashing a lot. When covering a long urban corridor that has to be split across multiple files, there is the extra complexity of dealing with the overlaps between the edges of each file. Of which more later.
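
The coverage arithmetic, spelled out (this is just the calculation from the paragraph above):

    # Each base tile is 4800 x 7200 px at 0.1 m per pixel.
    tile_w_m = 4800 * 0.1           # 480 m
    tile_h_m = 7200 * 0.1           # 720 m
    # Roughly 2 tiles wide by 2 tiles high covered so far:
    print("%.0f x %.0f metres" % (2 * tile_w_m, 2 * tile_h_m))  # 960 x 1440 metres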

One of the reasons progress is slow is that even when skipping to every second image in the NZR survey, up to 50% of each image is still overlapped, which is a lot, as you can understand. It lends weight to the idea of taking all the images in the survey and just chopping the middle out of each one. That is something I might try with Kamo, because the overlapped pixels are just wasted duplication that uses up storage, and I could save a lot of disk space that way.

So now in Gimp I've selected D 5, put it into Unified Transform, and started lining its top up with the bottom of D 3. It pays to be smart about the order in which Gimp draws layers, because of this overlap between overlays. The base aerial photos are all nicely broken up into tiles that don't overlap; they fit exactly against each other in the grid, but we don't have that convenience with the overlays. If I have three layers in the layer list, the highest one will be drawn over the top of the ones lower down in the list. That calls for some careful thinking when doing the unified transform, because what you see in the preview is the overlay that is selected for the transform drawn over the top of all the other layers, contradicting the order in which it will actually be drawn when the transform is completed. Here's a screenshot that illustrates that pretty neatly.


What you can see happening in this screenshot is that I've done the first stage of the overlay transform. I've put the pivot right up at the top of D 5, at a point I am confident I can line up on that is also present in D 3, and then gone down to the bottom of D 5 and dragged a size handle to size the overlay to match the surrounding terrain pretty well. So we start by overlapping somewhere near the top of our current overlay, because that is easiest until we get the overlay to the right size. Once we have that sorted, we have to "underlap", as I call it: we have to align the overlay at the bottom of D 3, because that part of D 3 will be drawn over the top of D 5 all the way down to the bottom edge of D 3. You can see where the overlay transform's frame is, with the handles and the pivot at the top of the frame. But down near the bottom of this screenshot, what's underneath changes to colour instead of black and white. That's where the bottom edge of D 3 actually reaches, and that's where it is most critical to line up D 5, because that is where the join between the two layers will actually be drawn.

When you come to do overlays on rail tracks, these old aerial images have marks that were specially placed on the ground by NZR staff. They would paint one sleeper completely white over its whole top and then add white paint to the centre sections of the sleepers on either side. The frog rails and guard rails on turnouts are usually also painted white, but that is more to highlight them as a safety hazard when crossing the rails in a yard, as someone could get their shoe stuck in the gap. This is what the result looked like. The purpose of the sleeper marks was to make it possible to overlay the aerial photos into a mosaic that was printed out on a big sheet of paper for use by the engineering staff. I have seen these printed plans, which were over a metre on a side, and some of them are in storage in archives around the country.


It is a bit harder to get the join between two layers exactly right when it isn't actually on one of the overlay's edges, partly because you have to take the preview opacity up and down to check the fit: at 100% opacity the overlay will not show the join at all, and it will look like the join is at the top of D 5 instead of at the bottom of D 3. Before Gimp 2.10.18 added the Readjust feature, it was a lot harder to do a proper alignment at these joins. One trick I used was to do the overlays starting from the bottom of the Gimp layer list instead of the top, so the preview would be correct for the layer order; another was to reorder the layers in the layer list if the join ended up being rough. But since Readjust became available it is much easier to do these joins really smoothly, and that has made a big difference to this task.

So now I am going to move the pivot down to the bottom of D 3 and line things up down there. At this stage I still have the transform frame at the full size of the overlay and haven't used Readjust, which moves the frame and changes its size. By bringing the preview opacity up and down and finding the exact point of overlap, I can get a very smooth join. Here the shortcut keys I have set up to step opacity up and down 10% per step are very useful for previewing what the join will look like. Basically you are comparing the overlay at 100% to the overlay at 0% and looking at how smoothly it overlaps close to the join. You can get a pretty good feel for how the join will look, and in the case of a rail yard or any other situation where the join is wider than a track or two, you want to be checking all the way across. Bring the pivot down to that join, put it on some key feature like a main line, and if necessary use Readjust to change the size of the transform frame at the join so you can get it looking really smooth. Just remember that Readjust moves the pivot to the centre of the new transform frame, and that you can only ever reposition the overlay by dragging inside the transform frame, regardless of its size.
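
As an aside, the same 10% stepping can be scripted for an ordinary layer from the Python-Fu console (the transform preview opacity itself is a tool option and, as far as I know, isn't exposed the same way); a rough sketch:

    # Step the active layer's opacity down by 10% (use +10 to step up).
    image = gimp.image_list()[0]
    layer = pdb.gimp_image_get_active_layer(image)
    pdb.gimp_layer_set_opacity(layer, max(0, min(100, layer.opacity - 10)))
    pdb.gimp_displays_flush()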

Once you have the pivot in place and the join looking good, check the edges of the overlay again against the underlying base imagery you are lining up on, if necessary using Readjust to make a really smooth alignment; be aware that in Gimp 2.10.18 you need to move the pivot back to the join after you press the Readjust button. Then finalise by checking the join is still correct, and we are ready to transform. In this case the most important alignment for the bottom of the overlay is the main rail line heading south, and for the sides it is the surrounding streets and houses: with no clear features near the bottom to confirm the bottom of the overlay is in the right place, we have to use features out to the sides to check our alignment. Here's the result as we have it now, with a really smooth join. I am not going to zoom in, but I got it really neat again across the rail yard, with varying results further out, as we'd expect.


And so I will carry on with the rest of the overlays to get all the way down, all nine of them.

There is one more thing to mention, and that is when you are doing a continuous corridor, as in an urban area. NZR had corridor surveys of various types done, with the major main line corridors completely covered, but it's impractical for me to use all of those aerial photos myself. So typically in rural areas I just do a few stations by themselves, but in the major urban areas, such as the five main centres, I do continuous corridor. It's inevitable that I will hit the layer or file size limits I work to, and such a segment of corridor will end up split over more than one file; for example, earlier on I did a continuous corridor from Newmarket to Waitakere in Auckland, a total of 36 km, that ended up going across three files. When you get to the edge of the current file, what you do is copy the overlay that is at the edge, and the underlying base layers, to a new file. There are a few ways of doing this, such as creating a layer group, putting the relevant overlay and base layers into it, dragging it to a new canvas, and saving that to a new file. Another is to crop the original image down to the base layers that cover the area occupied by the edge overlay and then save that as a copy of the original file. The reason for saving the base layers with the overlay is that in the new file you can drag all the layers together (including the overlay), using the layer linking in Gimp's layer list, into the grid in the new file, so that the base layers line up properly in that grid, and then add the rest of the base layers and overlays to the file as required. This ensures you don't have to realign the overlay, and that the overlay is still in exactly the same position as it was in the original file, so that new overlays you add to the new file can be aligned to it and the extracted tiles will line up perfectly in the GIS.

Part 6 is going to look at how to extract tiles for a GIS from the Gimp mosaic projects I have been creating, because that is the ultimate aim of this process.

NZ Rail Maps: Using Gimp To Georeference Retrolens Aerial Photos [4]: Getting Started With Overlaying

So now we can look at how we actually do the georeferencing. This happens as a multi-part process, and explaining the steps gives you an understanding of how it all works.

The basic idea of georeferencing a historic aerial photo is that by aligning it to a current aerial photo whose coverage we know, we can make the historic photo cover the same area as that current photo. This makes it easy to trace or otherwise reference features from the historic aerial photo and compare them with the current one. In NZ Rail Maps I use the historic photos to trace, for example, old closed branch lines, bypassed sections, and old yards and buildings at stations. This gives me a documentary record of historical features of a rail corridor, and these features are displayed on the resulting map.

We do the georeferencing by using Gimp's transform tools to overlay the historical image over the top of the current image, or a grid of images (tiles), covering the same area as the historical image. In practice we usually have multiple historic images in each Gimp project file, covering multiple stations or yards, or a continuous length of corridor (most commonly in an urban area, where there can be lots of points of interest between stations, such as sidings). For example, in the five main centres, I have created Gimp projects that each cover a section of continuous corridor. The Auckland corridors are at present up to 10 km in each Gimp project because they only have one historic layer; the Christchurch corridors created so far are only a few km each because they have multiple historic layers from different eras. Working with originals at 0.075 metre resolution and NZR survey images at 1:4350 scale, it doesn't take much distance to start adding up to gigabytes of disk space in a Gimp project, when just one 4800x7200 background tile has over 34 million pixels in it.
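
To put a number on that (simple arithmetic, not Gimp-specific):

    # One 4800 x 7200 base tile:
    pixels = 4800 * 7200                      # 34,560,000 pixels
    raw_mb = pixels * 3 / 1e6                 # 8-bit RGB, no compression
    print("%d pixels, about %.0f MB raw" % (pixels, raw_mb))  # ~104 MB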

The overlaying is the tricky and slow part. We have to make as many features as possible line up between the current background image and the historical overlay image, and for a variety of reasons this can be difficult. The major one is perspective distortion. When a photo is taken with any type of camera, the subject is most accurately rendered where it is directly aligned with the centre of the camera's lens. At any other point of the image, the camera is effectively looking at that part of the subject from the side, and the resulting distortion gets worse the further out you go. On a typical photo taken on the ground that isn't really an issue, but when an image is taken for the purposes of aerial photography, where we want an accurate representation of what is on the ground below, and given the distances incorporated in each photo, the distortion makes that difficult. That's a key reason why adjacent photos in a survey run are given a big overlap. You could get a really accurate aerial view by taking the centremost piece of each photo and joining it to the centremost piece of the next photo, which would be easy with the amount of overlap on, for example, the NZR station surveys. I could actually do this, but so far I have just used every second image. Because of the distortion, features that rise above the ground, like buildings, are difficult to overlap, and so are features on hillsides. The most challenging from a rail perspective are where a railway line on a raised embankment runs next to tracks on the flat, such as in the Dunedin railway yard, where the main line is raised. The technique for that is to have two copies of the historical image, one for the raised area and one for the flat area, overlay each copy just for its own features, and then blend them together using masking.

OK, let's get started. Typically the layer list will show your Retrolens aerial photos at the top of the list and the base imagery from Linz at the bottom, so that your historic aerial photos are always drawn on top of the base images. Here is a typical Retrolens image that you've downloaded and imported into Gimp. The first thing is to rectangle select it and crop off the edges (shown here with the rectangle selection in place).

Then use the Layer menu's Crop to Selection command. Repeat this for every layer in the stack that is the same size, changing the selection rectangle if you have layers that are different sizes or have different borders. The borders are of no use to us.
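
Here is a sketch of doing the same crop over every layer in one go from the Python-Fu console, on the assumption that the active selection is the rectangle you drew; resizing each layer's boundary to the selection rectangle, with offsets, is what trims the border while keeping the remaining pixels in place on the canvas:

    # Crop every layer to the bounds of the current selection.
    image = gimp.image_list()[0]
    non_empty, x1, y1, x2, y2 = pdb.gimp_selection_bounds(image)
    if non_empty:
        for layer in image.layers:
            lx, ly = layer.offsets
            # New boundary is the selection; the offsets position the old
            # content within it so nothing moves on the canvas.
            pdb.gimp_layer_resize(layer, x2 - x1, y2 - y1, lx - x1, ly - y1)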

The next step is to do the overlaying, one layer at a time. First, check that Snap to Grid is turned off in the View menu; it gets annoying when you are trying to precisely position a layer and Gimp keeps snapping it to a side of the grid. Also, there will still be a selection active from the cropping stage, so go to the Select menu and choose None to make sure there is no current selection.

Back in the day, with earlier versions of Gimp, you had to use different tools to perform the overlay task. When I started using Gimp, I used the Scale tool to size the overlay, which doesn't have any preview capability, so I would just scale by trial and error until I got the correct size. As you can imagine this was very time consuming. The next step after that was the Rotate tool, to get everything to line up. After a few months I discovered the Unified Transform tool, a new feature in Gimp 2.10 that combines the functions of both these tools and offers a full preview, which lets you do the lining up. To make things easier, we recommend setting up the keyboard shortcuts mentioned in Part 3 so you can quickly change preview opacity, canvas rotation, and zoom level.

Select the Unified Transform tool in the toolbox, then select the first layer in the layer list and click on it in the canvas. Your layer will be displayed as a preview frame, with a pivot (a circle with crosshairs inside it) in the centre and various handles at the corners and along the sides; familiarise yourself with the documentation on how to use these handles. Now click somewhere within the layer preview and drag it to somewhere over the base imagery. Look for a feature that is common to both layers, then drag the pivot from the centre of the preview to that common point. My favourites for common alignments are the ends of bridges or a set of points, but often I use road features as well. It only has to be approximate at this initial stage. Once you have the pivot in place on the preview frame, set the opacity to zero and drag the entire preview frame so that the pivot sits over the same place on the base imagery. Then increase the opacity and confirm the pivot is properly located.
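
There's no scripted equivalent of the interactive Unified Transform as such, but conceptually what it performs is a scale and rotation about the pivot, followed by moving the pivot to its matching point on the base imagery. A sketch with gimp_item_transform_2d, where every number is illustrative only:

    # Scale/rotate the layer about the pivot, then place the pivot at the
    # matching point on the base imagery. All coordinates are illustrative.
    image = gimp.image_list()[0]
    layer = pdb.gimp_image_get_active_layer(image)
    pdb.gimp_context_set_interpolation(INTERPOLATION_CUBIC)
    pdb.gimp_item_transform_2d(layer,
                               2400, 150,     # pivot point on the overlay
                               0.93, 0.93,    # x and y scale factors
                               0.012,         # rotation in radians
                               31050, 9780)   # destination point on the canvas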

One important caveat when doing a unified transform: if you press the wrong key on the keyboard, it may start the transform before you are ready. This is an annoying quirk of the current version of Gimp, and it has caught me out many times.
In the screenshot above, we have positioned the preview of 2788-D-1 over the appropriate place on the base imagery, with the pivot very near the top centre, lined up on the end of a railway bridge.

Once the pivot is in place, decrease the preview opacity to about 50% and use the most convenient resize or skew handles to drag the preview to fit the underlying base imagery. To make this work off the pivot, go to the Tool Options dock and ensure that under "From pivot" the Scale option is checked; this makes everything scale centred on the pivot when you drag those handles. Then resize the preview and try lining up some other features in both images, using the opacity adjustments to compare the preview and base as much as needed, and adjust the preview position and size to suit.

You'll see there are stretching (square) handles at the corners of the preview as well as in the middle of each side. But what if you need one in another place, closer to where you are working with the canvas zoomed right in and the nearest handle out of view? This is where the Readjust feature comes in. Press that button and you'll get a new frame right where you are viewing at the moment, instead of one around the full preview. Just note that the pivot moves too, to the centre of the new frame, and most likely you'll want to drag it back to where it was before.

Somewhere along the way you'll want to recheck the pivot position to make sure the preview is properly lined up there as well. If you need to adjust the preview position at the pivot, check whether you then need to re-drag a side or corner handle to resize the preview and line the rest of it up again. I usually try to have the pivot as close as possible to the top of the preview while I am resizing at the bottom-most side. For the Retrolens layer 2788-D-1 I initially chose to line up on the railway bridge over Walton St. The bridge has been lengthened since 1975, because the road it crosses has become a roundabout, which makes it tricky to line up on: I really need to check against something else on the ground to confirm the alignment, and because the bridge is raised above the ground, the tracks lower down might not line up so well. After a lot of checking, I realised I needed to move my pivot to a different location, because the southern end of the bridge was just not the right place, the bridge having been extended at that end, and the north end of the bridge was not right either. Eventually I decided the best option was to line up on the next bridge further north (Water St), which meant adding an extra base tile at the top of the canvas and increasing the canvas height.

Once you are sure you have the preview lined up as well as you can over the base image, check that interpolation is set to the right setting in the Tool Options dock (I use Cubic), then click the Transform button and away it goes. When the transform has completed, the next important thing is to lock the position of the transformed historic layer so that it can't be accidentally dragged out of place on the canvas, because unlike the base tiles it isn't aligned to the grid in any way.
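
The locking can be scripted too; as far as I can tell the position lock is exposed in the PDB in Gimp 2.10, but treat that as an assumption worth checking in your version:

    # Lock the transformed layer's position so it can't be dragged off
    # its georeferenced location. Assumes gimp_item_set_lock_position is
    # available in your Gimp 2.10 build.
    image = gimp.image_list()[0]
    layer = pdb.gimp_image_get_active_layer(image)
    pdb.gimp_item_set_lock_position(layer, True)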


The finished result after overlaying the first historical tile for Whangarei railway station, which is Survey 2788, Run D, photo 1, over the Linz 2014 0.1 m base imagery for the area. The main railway yard is about half covered by this historical image, meaning that the join onto the next historic layer, D 3, would be highly visible, as it would fall across multiple tracks. One way to avoid this would be to start with a different layer (maybe D 2 instead of D 1) to push the edge out; another is to exploit the overlap itself and use a bit of masking to move the join to a different part of the overlap.

In this case I went ahead with D 3 without any masking and managed to get a nearly perfect overlap, as you can see below. But there have been plenty of times when it hasn't been quite so smooth.

An overlap is going to be less than smooth when there are rail wagons or service buildings across it, for the reasons I mentioned near the top of this post. And sometimes there is simply too much distortion to ever make it work. In this case, it looks perfect across the rail yards, but further out, across suburbs and houses, there are bits of buildings cut off and misaligned, as shown below.
 
In this case, because the rail yard ended up near perfect, I didn't bother checking the result further out. The reality, as I mentioned above, is that quite often it is impossible to get a perfect alignment all the way across an aerial photo; I could have worked on improving the alignment towards the edges, but then the rail yard would have been misaligned.
 
Gimp has got better over time, with the Readjust feature (a result of a feature request I made on the Gimp forum) making it much easier to get perfect alignments across joins. I'll look at the challenges of doing this in Part 5.