Tuesday 30 June 2020

Reinstall KDE system with LXQt [1]; KDE network and display configuration extremely difficult with numerous bugs

KDE has a great reputation as a Linux desktop environment and has won considerable plaudits. However, it has numerous unresolved bugs, which I have observed in display and network configuration, and these are leading me to ditch KDE on one of my desktop computers in favour of LXQt. I have had three computers running KDE and one running LXQt, all on top of Debian, but after this change two will be running LXQt and only two will still run KDE.

The issues I have observed with KDE mainly involve non-standard configurations of both display and network, plus the inability to use x11vnc, which KDE somehow blocks from running. The problem can be summed up as KDE not being versatile enough to allow different configurations where the standard ones are insufficient. For example, Krfb, the VNC server supplied with KDE, will only buffer one display on a two-display computer, making it inferior to x11vnc.

The summary of the issues experienced to date is:
  • Networking. Unable to configure a static IP address and route for a network adapter and stop the automatic configuration of that adapter via DHCP. Unable to disable a network adapter and stop it from connecting, as KDE ignores the disable setting and creates another adapter with the same settings as the disabled one (the only option then is to disable or remove the adapter at hardware level).
  • Display. Unable to save the configuration of two displays on my computer. The displays are stacked vertically, but KDE cannot remember these settings and defaults back to side-by-side displays after each reboot.
  • Remote frame buffer. As mentioned above, KDE stops x11vnc from working, and Krfb will only buffer one display in a dual-display system.
My networking requirement is unusual because the computer has two network adapters, as it must connect to two different networks. On LXQt this is relatively easy to set up. On KDE I have run into the problems mentioned above, where it keeps ignoring the manual settings. Only one of the adapters can have a default gateway, because there can only be one default route to the internet. I have made it clear that the manually configured adapter is not to have a default gateway, but KDE ignores the instruction to use the manual settings and keeps sending DHCP requests on that network. The net result is that it has become impossible to enforce the required network configuration on this computer.
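One workaround available on Debian, regardless of desktop environment, is to take the second adapter away from the desktop's network manager entirely and configure it the traditional way in /etc/network/interfaces. This is only a minimal sketch of the idea; the interface name and addresses are placeholders, not my actual settings:

    # /etc/network/interfaces excerpt: a static address with no "gateway" line, so
    # this adapter never carries the default route and never sends DHCP requests.
    # "enp2s0" and the addresses below are placeholders for illustration only.
    auto enp2s0
    iface enp2s0 inet static
        address 192.168.20.10
        netmask 255.255.255.0

On a stock Debian install, NetworkManager normally leaves interfaces declared in this file alone, which is exactly the behaviour I am after here.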

To illustrate that this is not difficult in other desktop environments, I have set up a virtual machine on the same computer, configured in VirtualBox with the same two network adapters, and configured the networking from LXQt (the VM is running Lubuntu). This has worked flawlessly, without all the dramas KDE is creating. On the virtual machine the second adapter is configured with the "Shared to other computers" option, which allows an IP address and netmask to be specified without any gateway settings. In other words, that adapter has no gateway specified and must not send DHCP requests on the network.

Another difference between LXQt and KDE with regard to networking is the list of configuration methods and their implementation. In the connection manager provided with Lubuntu, which is admittedly different from LXQt on Debian, "Shared to other computers" lets a manual IP address and netmask be specified. But this information can't be entered when selecting the same configuration method in KDE. The KDE network manager in this case automatically creates its own IP address on the 10.x.x.x network, and you then have to configure the rest of your network to match.
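This limitation appears to be in the KDE front end rather than in NetworkManager itself. As a hedged sketch (the connection name and address below are examples only, not my real configuration), the shared method can be given a specific address from the command line:

    # Assign a chosen address to a "Shared to other computers" connection instead of
    # the automatically generated 10.x.x.x one; connection name and address are examples.
    nmcli connection modify "Wired connection 2" ipv4.method shared ipv4.addresses 192.168.20.1/24
    nmcli connection up "Wired connection 2"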

The key problem with KDE is that it takes over too many things on the computer and makes it difficult to work around its default settings. In LXQt it is extremely straightforward to configure a non-standard dual display and save the settings in a config file, just as straightforward to install and manage x11vnc, and generally more straightforward to configure a network adapter. However, ConnMan is a bit trickier to configure than the Lubuntu network manager - the Lubuntu one is among the best I have used on Linux overall.
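For the record, this is roughly what the stacked-display and x11vnc setup looks like from a terminal under LXQt; the output names are assumptions for illustration, not my actual hardware:

    # Place one display directly above the other (output names are examples; check
    # yours with "xrandr --query"). The line can go in an autostart script so the
    # layout survives a reboot.
    xrandr --output HDMI-1 --auto --output DP-1 --auto --above HDMI-1
    # Share the X display over VNC with x11vnc, using a password created beforehand
    # with "x11vnc -storepasswd"; -forever keeps it running after clients disconnect.
    x11vnc -display :0 -usepw -forever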

I have started a complete reinstall of Debian 10 on the computer, despite my misgivings over having to reinstall all the software (including the tricky iscan software for the Epson scanner), because KDE has such an impact on the system that it is difficult to just put LXQt over the top of it.

Wednesday 24 June 2020

Free Linux video editors [4]: Kdenlive

So...I have been creating and editing home video for quite a number of years now. It wasn't much of a thing when I was at school; schools didn't have much in the way of video equipment, and editing software at a reasonable price for the low-spec PCs we had back then was almost non-existent. Most of my video editing experience goes back about 20 years to a multimedia course I did at CPIT as part of my DipBC qualification, where we got to use Adobe Premiere Pro on PCs. From that, I went on to work in a school environment, and Windows Movie Maker was available around that time on Windows XP. I also purchased a license for Adobe Premiere Elements and used it for a time, as well as a license for Vegas Movie Studio; both of these products are targeted at the lower end of the video editing market and are lite versions of Premiere Pro and Vegas Pro respectively. So I have a bit of experience with various low-end movie editing programs. Premiere Elements was, I remember, very difficult to use and tended to crash a lot. VMS was a lot more stable once various patches had been applied, and had the capability to author DVDs directly. I also purchased a software package to write Lightscribe labels onto DVDs, and used these packages together to produce official video DVDs for the school for a while.

When it comes to Linux, I naturally expected it would be possible to find some reasonably good video editing software, but it has taken a fair while to achieve this, which is understandable when we consider the resources available to the open source community. In some respects this is a failing of the open source model itself, in that many projects are started and then abandoned or forked. This has happened numerous times in video editing and was one of my early frustrations with attempting to locate good software; examples being Shotcut and Pitivi, mentioned in earlier articles in this series. Thankfully, as time has gone on and with further investigation, more high quality packages have become known to me. For the moment, Kdenlive has turned out to be the best for the type of stuff I do, and it has quite a reasonable range of capabilities, few of which I have actually needed to use in my sample project.

Kdenlive, like some other editors such as Avidemux, is basically a GUI front end to the well-known FFmpeg libraries, via an intermediary framework called MLT (whose rendering tool is melt). It makes a whole lot of sense to implement the hard work via a well-known, well-resourced existing library rather than inventing a new one inside the application. Kdenlive is thus essentially an application that makes it possible to visually assemble and preview the various source elements that will be combined into a video production. Once this stage has been completed, clicking the Render button calls up MLT/FFmpeg to do the hard work of producing the final video. Of course, this can be quite a slow process, but it can run away in the background; unlike GIMP it doesn't demand so much of the system's resources that everything else grinds to a halt.
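Because a Kdenlive project file is essentially MLT XML, the same render can, at least in principle, be started from a terminal with the melt tool. This is only a rough sketch, with the file names and codec settings being my own assumptions rather than anything Kdenlive generates itself:

    # Render an MLT/Kdenlive project from the command line; the avformat consumer
    # hands the actual encoding work to the FFmpeg libraries.
    melt myproject.kdenlive -consumer avformat:final.mp4 vcodec=libx264 acodec=aac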

I found the GUI very easy to work with and only needed to make a few references to the online documentation. The timeline worked more or less exactly the same as in other editors I have used, so this part was very straightforward; Kdenlive also proved very stable, with no crashes experienced. My very basic project consisted of editing together two large clips, each of which had been made by joining together a number of smaller ones in Avidemux; as mentioned previously, the second one was rescaled from 1920x1080 to 640x480 to match the resolution of the first. There were also some smaller clips to add to the end which were in 1920x1080 and which I didn't bother to resize before bringing them into Kdenlive. Since the output resolution of the Kdenlive project was already set to 640x480, it did the letterboxing and downscaling of these clips itself, which is a useful capability I wasn't aware of. The rendering was also stable, and the fact that it took 5 hours to render the 3 hour clip at 640x480 has not been a significant concern; what has been much more important is that the software has proved stable and reliable. So I expect this package can do everything I need, and I will be interested to see where I can go with it in future. I didn't use any transitions in this project and it will be interesting to explore them in whatever type of video I work with next.

Monday 22 June 2020

Free Linux video editors [3]

Well, I have written about free video editors for Linux a number of times; this is the third article in the series. It’s taken me a few years to get to the point of discovering good quality software for Linux because there are so many open source projects that people have started and then abandoned, and finding the really good ones, which are mostly produced by professional companies, has taken a while.

Here is an article from Linux Magazine that lists a few options.

For my setup, AppPC, which has potentially sufficient storage and RAM, will be the system set up long term for tasks like this. At the moment it obviously doubles as the Gimp machine for editing mosaic projects for NZ Rail Maps, but that won’t be happening forever, so it is natural to look at using its ample resources for video production and, possibly in the future, audio editing as well.

There are some really good high quality video editors out there. Last time I wrote, I was taking a look at Blender and HandBrake among others. I am currently working on a tricky project that has involved bringing together a number of clips shot in two different formats: about half of it was shot at 640x480 in MOV format and the other half in 1920x1080 MP4. The problem is melding together these two sources, which differ greatly in size as well as aspect ratio. After a lot of mucking around I finally used Avidemux with two video filters to put black borders onto the 1920x1080 clips (changing their resolution to 1920x1440, which gives a 4:3 aspect ratio) and then scale them to 640x480. A little trick to remember is that video filters in Avidemux will not work unless you choose a video output mode other than “Copy”; in other words, you tell it to re-encode the video output. This also gives you a chance to downsample the video at the same time. The combination of these settings brought the final clip down from 14 GB to 0.5 GB in size. HandBrake was useful for converting the MOV clips into M4V, but I could not work out how to re-aspect the HD clips in it, so that ended up being what I used Avidemux for, as well as joining a number of separate clips into one.
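For reference, the same pad-then-scale operation can also be expressed directly with FFmpeg on the command line. This is just a sketch with placeholder file names and encoder settings, not what Avidemux runs internally:

    # Pad 1920x1080 to 1920x1440 (180 px black bars top and bottom, giving 4:3),
    # then scale to 640x480 and re-encode; file names and quality settings are examples.
    ffmpeg -i hd_clip.mp4 -vf "pad=1920:1440:0:180,scale=640:480" -c:v libx264 -crf 23 -c:a aac clip_640x480.mp4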

So I now have two separate clips, each about 80 minutes in length and in the same format, to put together in a video editor along with a few still shots. I am trying out several different software applications to see what will work best. Here is what I have attempted so far:
  • Blender is FOSS and has a lot of support behind it, but not as a video editor; that function is not well supported in the wider community. When I tried using it, I found it clunky and not very intuitive, which led me to look into other alternatives.
  • DaVinci Resolve is a free version of a commercial product. It is complex to install and wouldn’t work out of the box for me, so I haven’t looked further into it.
  • Lightworks is also a free version of a commercial product. It looks promising, but so far I haven’t tried installing it.
  • Cinelerra is another of the few that are FOSS. There are two versions: HV, which is the full deal, and GG. If you want to use HV, you have to compile it yourself from source. GG is a whole lot easier, with native packages in a repository that install directly onto Debian (and a few other distros). So I have installed it for testing and will see how it goes. The software looks reasonably good, but the interface is confusing so far, and it has also crashed once.
  • Kdenlive is another fully FOSS package. It is officially part of KDE and is distributed as an AppImage, which makes installation very simple and straightforward as it involves just a download. So I have this installed as well, for testing alongside Cinelerra-GG. At the moment Kdenlive is the one I am trying the most.
So we will see how things go with the last two packages on this project, and which option is the easiest to use and gives the best results. I hope to have this project finished by the end of this week.

Wednesday 10 June 2020

Raspberry Pi or HTPC For Livestream Player [5]

Last time I wrote on this subject I took a look at whether to continue using a Raspberry Pi to play livestreams and video clips, or to move to a more conventional HTPC setup with a normal computer. My original attempts to use a normal computer were with the Mini-ITX board and chassis I have, which had challenges similar to the Pi's: being underpowered and failing to support the latest codecs, resulting in playback performance problems.

I then looked at building an HTPC using a Silverstone ML03 chassis. For a variety of reasons this has not progressed, one of them being that the chassis seems to be no longer available. I then took a look at using the computer attached to my desk, which is built on a Gigabyte H110M-D3H board. It has been used for these same purposes from the desk, and I felt it could be dual-purposed, since it already does the same things I require from a bedside-accessible PC. This meant shifting the main playback screen from above the desk to a location where it is readily visible from the bedside, with the computer controlled using a Logitech multimedia wireless keyboard with a trackpad.

Initial trials used only this one screen, but I eventually realised it was desirable to keep a second screen so the computer could double up for other work. So it now has the same screen configuration it always had: a 1440x900 Dell screen above the desk and the 1360x768 32" Sony monitor for playback, which is visible from both desk and bed. The speakers have for now been left in the same location above the desk.

Meanwhile the front room is now fitted with an old Bravia LCD TV that I was fortunate to pick up on Trademe for $80. Since it does not have Freeview, it is set up to play from a NUC, satellite or Chromecast.

Another option previously considered for the Pi is to fit it into a chassis designed with much better heat dissipation characteristics. As discussed previously, PB Tech have these for about $20. I plan on getting one for the Pi as soon as possible, but even if it makes the Pi work properly, I won't be changing the bedside setup, as it is a good use of existing resources to have the computer on the desk set up for both roles. It would, however, leave open the possibility of finding another use for the Pi so that it isn't sitting around here gathering dust.

Tuesday 2 June 2020

NZ Rail Maps: Using Gimp To Georeference Retrolens Aerial Photos [7]: Extracting Mosaic Tiles

So I have been trialling the alternative overlay method with more files and have decided to redo all the mosaics for Auckland so far. This is a few days' work, but as there are already some issues with accuracy in some of the hillier areas, or where the embankment is raised, it is going to bring improvements. However, file size savings are harder to nail down, and in general I am not seeing consistent, meaningful reductions across a range of files. What I have arrived at to maximise the saving is to do a bulk crop to 50% of the original size, and then custom crop each layer down to just the amount that is needed, which varies from layer to layer depending on what is required to align it with the surrounding terrain (roads etc). Currently this technique is being used on a big mosaic project, Auckland-Westfield NIMT, which covers 14 km of corridor and has around 180 layers. OK, so there I broke my rule: by cropping the historical layers so heavily we can save a lot on disk usage and have more of them, but I am still testing the water with so many layers in this project, which uses just a small part of a very large canvas of 35 gigapixels - probably the biggest I have ever worked with - and that aggressive cropping is necessary to ensure the file size doesn't balloon unmanageably. Here's a screenshot of the canvas, with all the base tiles in place. The Port of Auckland can be seen at upper left. This corridor was opened in 1930; the original route of the NIMT was the western line, dating from 1873, which became the North Auckland Line and Newmarket Branch.



This post outlines the last stage in the georeferencing process, which is to extract tiles from the mosaic project that can be imported into the GIS and used to produce historical maps whose features are accurately positioned relative to present-day maps. This is, of course, the major point of the whole exercise. There are some important considerations, the first being whether to use the same tile size as the base imagery (4800x7200 in this case) or to combine a number of tile spaces into a larger tile. I tend to go for the latter option since it is simply less work in extraction. The important thing is to use the grid to make the top left corner the same as that of an existing background tile, so that I can copy the coordinates from that tile when I make the sidecar files for the mosaic tiles. Sometimes there isn't one available and the coordinates have to be calculated from the nearest available tile. Here, using EPSG:3857 with its coordinates expressed in metres gives a distinct advantage over a CRS whose coordinates are expressed in degrees, because all you have to do, knowing the pixel resolution in metres, is a simple calculation. In this case, at the 0.1 metre pixel resolution of the base tiles, each 4800x7200 tile covers an area 480 metres wide and 720 metres high. It is then straightforward to alter the top-left coordinates in the world file using these numbers.
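For example, suppose the nearest tile with downloaded sidecar files sits one tile space west and one tile space north of the new mosaic tile's top left corner: the new top-left X is then 480 m larger and the new Y is 720 m smaller. An annotated world file for that case might look like the sketch below. A real .jgw contains only the six numbers; the notes on the right are explanation only, and the coordinate values here are invented placeholders, not real tiles:

    0.1            x pixel size in metres
    0.0            rotation terms (zero for north-up imagery)
    0.0
    -0.1           negative y pixel size in metres
    19455480.0     top-left X = the known tile's 19455000.0 plus 480
    -4404000.0     top-left Y = the known tile's -4403280.0 minus 720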

The second consideration is how to name the new tiles. I always base my names on the original base tiles, with a prefix. If the base tile gets scaled up or down, I also put a suffix on the original name to show that it has been resized. Let's say I have a base tile 93Y35-92S5Z that has been scaled to double size in each direction; I will have renamed it 93Y35x2-92S5Zx2. The prefix is a single letter representing the station, followed by four digits representing the year. For a tile area that covers multiple original tiles, I use a filename that shows the first tile and the last tile in the range covered. So here is what my large mosaic tile's filename might look like:

X1984-93Y35x2-92S5Zx2+93Y3Bx2-92S63x2.jpg

So in there we have station X in 1984, and then the range, using layers that have been scaled up by 2 on each side. Note the use of the + character, which may cause problems on Windows; the name is legal in Linux, which I use and which has far fewer filename character restrictions than Windows. This is just an example - I don't know if I can cover all of that large area in one file, as Gimp does have an export size limit; it may not be an efficient area to cover in a single file; and in this example we are not actually scaling 0.1 metre tiles up by two times anyway.

My sidecar files are .jgw, .xml and .jpg.aux.xml. They all have to be given the same base name with the correct extension, and we simply copy the ones from the original base tile at the top left corner (93Y35-92S5Z in this example). The only change needed is in the .jgw file: if the imagery has been scaled, the pixel resolution values need to be changed, e.g. from 0.3 to 0.15 m. This is well described in previous posts. I described earlier in this post how I might change the top-left coordinates in the same file if I didn't have a set of sidecar files already downloaded that matches the top left base tile of the area I am covering. Other posts on this blog describe the world file format in general and what those six numbers mean.
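Put together as a rough terminal sketch, using the example names from above (the sed edit assumes a standard north-up world file in which the only lines equal to 0.3 and -0.3 are the pixel size values):

    # Copy the three sidecar files from the top-left base tile to the new mosaic
    # tile's name, then halve the pixel size in the world file for the 2x scaled image.
    for ext in jgw xml jpg.aux.xml; do
        cp "93Y35-92S5Z.$ext" "X1984-93Y35x2-92S5Zx2+93Y3Bx2-92S63x2.$ext"
    done
    sed -i -e 's/^0\.3$/0.15/' -e 's/^-0\.3$/-0.15/' "X1984-93Y35x2-92S5Zx2+93Y3Bx2-92S63x2.jgw"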

So hopefully in this series of posts I have adequately covered how to georeference the historical aerial layers in Gimp and how to get them into the GIS. Along the way I have learned some stuff as well, which is why I like to write these articles.