Tuesday 21 January 2020

Using Kodi with multiple monitors is abysmal.

Kodi is a great media player except for how it handles display settings. One of the areas that will really trip it up is when your computer has multiple monitors, especially if they have different resolutions.

What I generally see with multiple monitors is that Kodi defaults to the wrong monitor; you then go into the System settings and change the screen to the monitor you want to use, it tries to use that screen, trips over something silly like it being a different resolution from the one it was on before, and refuses to use it.

Even if you go into the settings file and hard-code the display you want it to be on, you can't get it to actually display on that screen; it will completely ignore you.

What I find really annoying is if I manage to get it to work on the correct screen, then if I reinstall the operating system, I have to go through this annoyance all over again. Every other piece of software on my computer can see that there is an existing user profile with settings in it for them to read, but Kodi insists that I have to set it up all over again.

At any rate here is where some of the settings are stored in the settings file.

You can set the monitor to be used by finding the .kodi folder in your user profile, going into the userdata folder, opening guisettings.xml and scrolling down to the section starting with <videoscreen>. Then find the <monitor></monitor> line. This should show the name of the display to be used, e.g.
<monitor>HDMI-2</monitor>

To get the names of the display outputs on your computer, open a command prompt and run
xrandr
which gives a list of the ports with their names. For example a typical output might look like
VGA-1 disconnected...
HDMI-1 connected...
HDMI-2 connected primary...

so the display name you want is one of those, such as VGA-1, HDMI-1, DP-1 and so on.

If Kodi is using its "default monitor", this line will look something like
<monitor default="true">Default</monitor>

and then you need to change it to look like the format above to use the monitor you want.

If Kodi still won't use the correct display, I have found a brute-force approach to the settings can be helpful. I recommend backing up guisettings.xml first.
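For example, a simple copy of the file described above will do as a backup:

cp ~/.kodi/userdata/guisettings.xml ~/.kodi/userdata/guisettings.xml.bak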

Edit as above except in the <videoscreen> section remove every line except the one that specifies the monitor, e.g.
<videoscreen>
       <monitor>HDMI-2</monitor>
</videoscreen> 

I have found this to work, but once Kodi is running it may naturally be necessary to go into the Settings section of the application and redo a few of the other settings, because of the ones you deleted from the settings file.

However, on other occasions I have found that Kodi will only work the first time it runs with most of the display settings removed. After it has written new values into the config file, the next time it starts up the problems return, as if the settings it wrote to the file contradict the one that tells it which display to use. In other words, this problem can't actually be solved.

So it is simply impossible to use Kodi in certain display configurations; it won't work on any display except the default one.

Monday 20 January 2020

Google Photos: The free photo service's automatic deduplication creates challenges


Yeah, we all know what Google Photos is: another cloud-based service from Google, and one that other cloud providers like Flickr find it difficult to compete with. I was quite keen on Flickr, but they have their own issues, such as the fact that people outside the US have no rights under the DMCA: anyone can send a takedown request for a photo and Flickr will just delete it without notifying the account holder at all.

I don't think Google's service is better, but they have marketed it well. It has far fewer options than Flickr, which actually had a very good user interface for the people who owned the photos and were publishing them online. One feature Flickr lacks, though, is automatic deduplication, i.e. detecting when you upload the same photo multiple times.

It turns out that Google Photos has automatic deduplication built in, which becomes a problem because you can't turn it off. There are times when you want it on and times when you don't. The biggest problem with not being able to switch it off is that when you upload a newer version of a photo, the old one isn't replaced; instead, Google Photos just links to the old one. That means that if the old photo gets deleted, it disappears from the new photo album as well (or perhaps Google preserves the new link). Either way, replacing an older photo with a newer version is potentially problematic. Flickr makes this really easy to do, but on Google Photos you have to track down where the older photo is and delete it before inserting the new one. Deleting the old photo also removes it from every album it appears in, and its replacement won't automatically be put back into those albums either. So given that most map sets have three albums - separate ones for diagrams and aerials, plus a combined one - the map has to be taken out of two albums at once and then manually added back to both of them.

I discovered this when I started creating newer versions of maps where the only things that changed were the filename and the file modification time. Google compares only the photographic data, not the filename or modification time, so the sort order in the photo album was being dictated by the older copies of the maps, not the newer ones. This means I have to delete the older copies out of the albums before uploading the versions produced by my local script, which sequences file modification times to match filename sequences. The situation arises whenever I insert extra maps into an existing sequence, which has happened more than once: I renumbered some of the existing maps to make room for more than existed when the sequence was first created, so the filenames changed, and so did their modification times once the script was run across them.

Of course, there would be no need to run these scripts if Google actually offered more sorting options than file time, which is the only option provided in the user interface. That is in fact part of the whole reason why maps are named with a three-digit number - many sorting algorithms are purely ASCII-based rather than numeric (natural sort) and cannot do a proper name sort otherwise; a plain ASCII sort would put A10.jpg before A2.jpg, whereas zero-padded names like A002.jpg and A010.jpg sort correctly. But you can't do any kind of filename-based sorting in Google Photos.

Sunday 12 January 2020

Rsync Backup System [5]: Ongoing Backup Journey

Since writing the first four articles on this topic, I have not yet figured out how to implement an incremental backup strategy, and so I am still doing only a full backup every few months. Hence right now I am doing my first full backup of mainpc in three months.

The first issue that has come up is zfs saying there are no pools available on the server that mounts the backup disks and runs the backup. This is apparently because zpool is not designed with removable disks in mind, even when they have been unmounted prior to removal. After a lot of head scratching trying to figure out how to get zfs to recognise the previously created pool on the removable disk I had inserted for the backup, I eventually stumbled across the zpool import command. Once that was issued, the pool was reported as online and the zfs set mountpoint command could be issued:

zfs set mountpoint=/mnt/backup/fullbackup mpcbkup2

At that point I could go to the mount path and work with the contents of the disk.
One possible cause of this issue is that the disk is physically on a different path in the computer than the one it was created on. A key factor is that zfs appears to be designed primarily to work with /dev/sdx device paths. We already know that for regular disk mounting we can use UUIDs to get around the problem of the /dev/sdx device path changing at the whim of the operating system at boot time, which actually does happen. For some reason or other the backup disk in this instance was at /dev/sdh, when it may have originally been in a zpool created at /dev/sdb according to the earlier articles I wrote. At any rate, the import command can bring the zpool back to life. Perhaps there is a need for a command that suspends a zpool when the disk is taken offline, since it appears to be necessary to do more than just unmount the disk.

I looked into this further and after considering the options, the best command to use is
zpool export mpcbkup2

This command basically closes down the pool so that the disk can be removed from the system. The system will then report that no pools are available. The next time the disk is needed, the pool can be imported back into the system and mounted as above.
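So the working cycle for a removable backup disk looks something like the following - this is just my understanding pieced together from the commands above, using the same pool name and mountpoint:

# before pulling the disk out
zpool export mpcbkup2
# after reinserting it: list the pools available for import, then import and mount
zpool import
zpool import mpcbkup2
zfs set mountpoint=/mnt/backup/fullbackup mpcbkup2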

I also need to buy another backup disk to ensure each major server has two full backup volumes. At the moment I don't have enough disks to guarantee this. It will only cost about $100 to get another 2 TB disk.

The lingering question, of course, is how to detect which files were backed up on which date and therefore which files need to be backed up incrementally. To make any progress with this, the very first step is to find out whether rsync can log the files that have been successfully backed up, find some way of automatically scanning the log for the names of those files, and then store them somewhere (for example, an SQLite database). Another option is to find some way of having rsync only back up files that have been modified after the date of the last full backup. So far I haven't spent much time thinking about these options, because the simplest solution by far would be some sort of command or script that handles this completely automatically. Linux lacks the file modification flag that is implemented in Windows (the archive bit). People explain this away by saying the filesystem has superior capabilities and that a file modification bit is a very crude mechanism, but that doesn't get around the fact that resetting the flag after each backup, and then scanning for files where it has been set again, is very easy to work with. In Linux there is no easy way to record that information unless you store a date somewhere for each file and then scan against those dates. I explored the possibility of writing an extended attribute for each file; the question there is where the data would be stored, as we do not want to modify the file itself. So I hope to spend more time over the next month or so exploring these issues further.
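One untested possibility for the "modified since the last full backup" option would be to use find on the source machine to build a list of changed files and feed it to rsync; the date, paths and list filename below are only placeholders:

find /home/patrick -type f -newermt "2019-10-01" -printf '%P\n' > /tmp/changed.txt
rsync -arXvz --files-from=/tmp/changed.txt /home/patrick/ /mnt/backup/incremental/patrick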

I am currently trialling having a log file produced and using this form of the rsync command:

rsync -arXvz --progress --delete backupuser@192.168.x.y:/home/patrick/ /mnt/backup/fullbackup/patrick --log-file=/home/patrick/rsync.log --log-file-format "mainpc|%f|%M|%l|%b|%o|%U"

The only issue to date is that there is supposed to be a %a option for logging the remote IP address, but it is not being recognised, so that information is not available; instead the script has been customised to output the actual remote computer name as a literal. The rest of the information is logged in the specified format, and the pipe characters can be used as delimiters to separate the fields. So I have made some progress on this issue, and now the question is how to use the information to analyse what is needed for future backups.
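As a first pass at that analysis, here is a minimal, untested Python sketch of loading the log into SQLite. It assumes each transfer line ends with the pipe-delimited message after rsync's date/time and [pid] prefix, and the file, table and column names are just ones I have made up for illustration:

import sqlite3

def load_log(log_path="/home/patrick/rsync.log", db_path="backups.sqlite"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS backed_up
                    (host TEXT, path TEXT, mtime TEXT, size TEXT,
                     sent TEXT, op TEXT, uid TEXT)""")
    with open(log_path) as f:
        for line in f:
            # drop the "YYYY/MM/DD HH:MM:SS [pid] " prefix rsync adds to each line
            msg = line.rstrip("\n").split("] ", 1)[-1]
            fields = msg.split("|")
            if len(fields) == 7:   # host|%f|%M|%l|%b|%o|%U
                conn.execute("INSERT INTO backed_up VALUES (?,?,?,?,?,?,?)", fields)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load_log()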

With the backup of mainpc I wiped the disk first, but for serverpc I chose to send the data to the existing backup, which means it can run a lot faster because it is not transferring every single piece of data to the disk. That option should speed up the backup, and using the --delete option will remove files on the disk that are no longer present in the source directory.

I have to set up mediapc to be able to back itself up, which will be implemented as a "backupuser" account that is logged into on a virtual terminal console and then runs the backup for mediapc locally, with read permissions to the source files and folders.

An option for progressing the backups is simply to use the full backup disks to hold the incrementals as well, though this may require bigger disks. The advantage is that rsync can handle this itself, with the incrementals pushed into a separate directory. I would want to have possibly three backup disks for each computer, which brings the need to buy more disks.
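The rsync feature that seems to fit here is --backup together with --backup-dir, which moves changed and deleted files into a separate directory instead of overwriting them in place. A hedged example based on the command above, with the incremental path just a placeholder:

rsync -arXvz --progress --delete --backup --backup-dir=/mnt/backup/incremental/2020-01-12 backupuser@192.168.x.y:/home/patrick/ /mnt/backup/fullbackup/patrick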

At any rate this will take some time to devise. Another alternative is separate incremental disks, with a second removable disk caddy installed in this computer, so that it can back up to a completely separate disk. This has the advantage of not requiring new disks, and of keeping the full and incremental backups separate.

Wednesday 8 January 2020

Python Scripting [7A]: Change File Modification Time on Map Images

This scripting project is a part of the NZ Rail Maps project. The aim is to change file modification times for a group of maps so that they follow a certain sequence when imported into Google Photos.

In the NZ Rail Maps project, maps published at what is defined as the "Basic" level are produced in two formats. The first format is "Aerial Maps", with the filename being in the following format:
  • First character is the capital letter "A"
  • Next three characters are the numerical digits "000" to "999"
  • Followed by the suffix ".jpg"
This means the filenames are a sequence and all operating systems will sort them in sequence alphabetically.

The other format is "Diagram Maps" which is the same as above except the first character is the letter "D".

When working with programs that recognise alphabetical sorting of file names there is generally no problem. However, Google Photos only sorts according to age, which is assumed to be the file modification time.

The purpose of the script is to achieve the following two objectives, which are both achievable simultaneously:
  • Ensure that all A series maps are given a file modification time that follows the same sequence as their filename, and then follow the same process for the D series maps
  • Ensure that for each numerical sequence number, the A series map is directly succeeded by the D series map.
Example: A001.jpg, D001.jpg, A002.jpg, D002.jpg will be the strict order by file modification time after running the script.

This means that where both types of map are placed in the same disk folder, Google Photos will interleave them when they are imported. The user can also use the alphanumerical sorting order in the file dialog box to import only one series instead of both at the same time, if the Google Photos album should contain only one type of map.

This means that in addition to discrete "Aerial Maps" and "Diagram Maps" photo albums there can also be a "Combined Maps" photo album easily.

There can be variations of the filename format, such as Axxx-2015-Old.jpg, so basically the steps required are to extract the first character as a letter, the next three characters as digits, and then keep whatever remains of the filename after that. Getting the exact sort order right is a little tricky, as it requires taking into account more than just the first four characters of the filename. All we guarantee is that the first four characters are Axxx or Dxxx, where xxx is a numerical digit sequence from 000 to 999.
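As a rough illustration only (not the project's actual script), here is a minimal Python sketch of the basic idea, assuming all the maps are in one folder; the function name, the one-minute spacing and the starting time are arbitrary choices of mine:

import os
import re
import time

MAP_RE = re.compile(r'^([AD])(\d{3})')

def resequence(folder, step=60):
    # Collect the map files and sort them by (number, series letter),
    # so the resulting order is A001, D001, A002, D002, ...
    names = [n for n in os.listdir(folder)
             if MAP_RE.match(n) and n.lower().endswith('.jpg')]
    names.sort(key=lambda n: (int(MAP_RE.match(n).group(2)),
                              MAP_RE.match(n).group(1)))
    # Assign ascending modification times, ending near the current time
    start = time.time() - step * len(names)
    for i, name in enumerate(names):
        t = start + i * step
        os.utime(os.path.join(folder, name), (t, t))

if __name__ == '__main__':
    resequence('.')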

Tuesday 7 January 2020

First Kubuntu computer

So the NUC gets Kubuntu. The primary reason for wanting it on this machine is the screen layout: two screens stacked vertically, with a single taskbar shared between them, placed in the middle. With a TV as the upper screen and a smaller computer monitor turned into portrait mode as the lower screen, the logical place to put the taskbar is at the top of the computer monitor, leaving the TV with no taskbar wasting space on it. You also don't have to move the mouse so far to get to the taskbar from the top screen.

The installation was of Kubuntu 19.10 and went very smoothly. As we all know, Kubuntu is the version of Ubuntu that has KDE as its default desktop environment. It is being trialled as an alternative to Lubuntu, offering the same ease of installation whilst also giving us the full bells and whistles of the KDE desktop. How well it performs on lower-spec hardware, of course, is the 64 million dollar question. Lubuntu is well known for being optimised for low-spec hardware, and KDE, with all its bells and whistles, has traditionally not been in the same league whatsoever. But the latest versions of KDE have belatedly addressed this concern, and since the cutting-edge release of Kubuntu is likely to include the very latest stable edition of KDE, we can assume it will be a bit less of a resource hog than its predecessors.

This is the first time I have installed any of my computers with Kubuntu, although I already have plenty of experience with KDE on Debian. Only a computer that already has Lubuntu would be considered for Kubuntu. Anything running Debian will stay that way, as I prefer Debian over Ubuntu for most things; the only reason I am making use of Lubuntu or Kubuntu is that they are different enough from plain Ubuntu and include all the proprietary bits that Debian omits by default, which makes them easier to get going on laptops or other computers with wireless and bluetooth etc.

Once again it is interesting to try the alternatives to Xubuntu, which I have now completely abandoned. This is due to XFCE being so slow to be updated, which gives it a very dated look in either Ubuntu or Debian form, although some other distros that use XFCE by default, such as Manjaro, have managed to give it a very modern appearance.

Elitepad gets a refresh...back to Windows 8

One of my least-known computers is an HP Elitepad 900 I purchased around 5-6 years ago. This is a Windows 8 tablet, which is a pretty unusual beast; MS had not invented the Surface at that stage. The Elitepad is a business tablet, which for its spec was relatively expensive, but it is designed to be solid and serviceable, with the chassis made out of metal and held together by magnets rather than being glued together as most cheap domestic tablets would be.

The Elitepad is designed with business security in mind, so it has very few ports on the tablet itself. Basically all the bits on it are the home button, power button, rotation lock switch, headphone port, camera, volume up/down buttons, microSD slot (under a cover) and dock connector. Various accessories including the charger can be plugged into the dock connector. I purchased mine with an expansion jacket, which encloses the tablet, has space inside for an optional second battery (not installed), and connects to the Elitepad's dock connector. The jacket adds its own dock connector as well as an SD card slot, HDMI port and two USB ports. The system comes with 2 GB of RAM and has a 64 GB SSD installed.

Inside, the Elitepad has an Atom Z2760 processor, and it came with the full version of Windows 8 installed (not Windows RT). Around the time I purchased it, Windows 8.1 became available and I was able to download the update from HP and install it off a pen drive. The charging adapter supplied with it provides 9 volts at approximately 1 amp. This model was available with an optional SIM slot for direct connection to a cellular data network, but I chose to purchase the Wifi version.

The most use I ever had from the tablet in the past was when I was studying a few years ago and used it instead of a laptop (I actually didn't own a laptop until quite recently). I got a Logitech bluetooth keyboard and MS Wedge Touch mouse to use with it, and a third-party stand so it could sit on an angle like a regular screen. I installed a few different apps on it and used it at several tertiary institutions whilst I was studying.

Recently I hauled it out of the cupboard to use as an e-book reader and found it still has 95% of its battery life remaining, according to the wear level reported by software that checks the onboard battery. Because of this I intend to use it as much as possible with the Kindle app. However, after uninstalling some software, I could not get Kindle to run on it, and eventually ended up recovering it. If you haven't done a device or computer recovery before, OEM manufacturers set up a special partition on the HDD (a 64 GB SSD in this case) for the reinstallation of Windows; you simply follow a few on-screen prompts and leave the tablet running for a while until the recovery is complete. Prior to that I had checked whether I had a chance of putting Lubuntu on it, but I discovered from a support article that it will only boot a pen drive that has Windows 8 installed.

The recovery worked as expected, but it took a very long time to get Windows 8.1 installed: the first step was more than 150 updates to Windows 8, followed by several failed upgrade attempts, after which, to my surprise, having left it sitting on the charger for a few days I was presented with a Windows 8.1 installation prompt. So it looks like it is going to work well. But I will want to get a second charger for it because of the ill-considered HP design decision that it can only be charged via the proprietary docking port. As the charger will have to come from the US it will not be cheap.