Saturday 26 January 2019

Raspberry Pi [5]: Using the Pi as the tethered desktop

My main internet feed is filtered, and I sometimes need to access websites that are blocked by the filter; working around the filter is easier than trying to get a site unblocked. For example, a Family First website, porninquiry.nz, is currently blocked by the web filtering, as are several misclassified Christian sites I read, such as truelovedates.com and endsexualexploitation.com.
For this reason I need a computer that has both Wi-Fi and wired network connections. Bedroompc is one example, but it is inconvenient to use, partly because its Wi-Fi connection turns itself off after a while and can't be brought back up without a reboot. That made it more convenient to configure it to use the network cable for everything and not use the Wi-Fi at all, but then it can't be the tethered desktop, because that role needs both connections.
So I have just finished setting up the Pi for this task, by flashing Raspbian onto it and connecting it to the same display as Win10Pc uses. Since I now have so little use for Win10Pc, I have switched the display over to the Pi (Win10Pc is still connected to the VGA input on the screen, while the Pi has taken over the HDMI to DVI adapter cable) and given the Pi the wireless mouse and Bluetooth keyboard. With a little more work I can install and auto-start x11vnc, which allows me to VNC into it (using Remmina) from mainpc, so I can sit in front of my regular keyboard and work in a Remmina window.

Then, in the same way as I had BedroomPC set up, the wired connection is given a fixed IP address with no other configuration information, so the computer can't reach the internet through the network cable but we can still use that cable for the x11vnc connection. The Wi-Fi connection is then set up to reach the internet through the tether (one of my phones set up as a hotspot). And then it's all hunky dory. All checked out and working just fine.
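For reference, on Raspbian one way to pin the wired interface to a fixed address with no gateway or DNS is an entry in /etc/dhcpcd.conf. This is only a sketch; the interface name and address here are assumptions, not my actual values:

```
# /etc/dhcpcd.conf -- fixed address on the wired interface, with the
# "static routers=" and "static domain_name_servers=" lines deliberately
# omitted, so the cable carries VNC traffic but cannot route to the internet
interface eth0
static ip_address=192.168.1.50/24
```

Leaving out the router and DNS lines is what stops the box from ever using the cable to reach the internet, while the address itself keeps it reachable from mainpc.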

Of course this means I won't be doing any more testing of Pi vs Chromecast. So I will experiment with putting Kodi onto the Elitepad tablet that I have, and test it on a TV to see how well it plays.

After using the Pi in this role, it works well, but I want to be able to turn it back into a media player, which conflicts with this role. I have decided the tethered PC role can be filled by my Toshiba R500 laptop, which has a docking station it can be quickly attached to and detached from; the dock can be left in position on the desktop ready for the laptop to be deployed quickly for the tethered PC role, which is only an occasional usage. The R500 also has the advantages of having its own display, being significantly more powerful, and being able to run regular Debian, albeit with LXQt as the desktop environment, with Firefox Developer as the browser, which is my preferred web platform. I had a few issues with the Pi dropping out on VNC, which are probably due to known issues with the stretch standard packages of x11vnc and are fixed in the buster packages. So that is where things are heading now: the Pi being set up again with LibreElec and tested as a media player.

Wednesday 23 January 2019

Python Scripting [2A]: Syncing a video and music directory tree

Our first scripting task, which involved XML extraction and copying files, is complete; the next task to be scripted in Python will be extracting audio from our collection of music video tracks and syncing it into a directory tree.

The actual steps needed are:
  1. Compare two directory trees, one for video and one for music.
  2. Where a music track is missing, extract the audio from a video clip in the video directory tree and save it into the music directory tree.
So it can be described in a couple of steps but the breakdown of tasks may be a little more complex.

The task involves calling ffmpeg to perform the audio extraction and the parameters may vary depending on the type of source file.  

So we will see how things proceed on this one.
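When the time comes, the compare-and-extract steps above might be sketched roughly like this. It is only a sketch: the paths, extensions, and ffmpeg parameters here are assumptions, and as noted the real parameters will need adjusting per source file type:

```python
import os
import subprocess

# Hypothetical locations; the real trees live elsewhere.
VIDEO_ROOT = "/home/patrick/Videos/Music"
MUSIC_ROOT = "/home/patrick/Music"

def missing_tracks(video_root, music_root, video_ext=".mp4", audio_ext=".mp3"):
    """Return relative paths (minus extension) that exist in the video
    tree but have no matching audio file in the music tree."""
    def stems(root, ext):
        found = set()
        for dirpath, _, files in os.walk(root):
            rel = os.path.relpath(dirpath, root)
            for name in files:
                stem, e = os.path.splitext(name)
                if e.lower() == ext:
                    found.add(os.path.normpath(os.path.join(rel, stem)))
        return found
    return stems(video_root, video_ext) - stems(music_root, audio_ext)

def extract_audio(video_file, audio_file):
    # -vn drops the video stream; ffmpeg chooses an audio encoder from
    # the output extension. These parameters will vary by source type.
    os.makedirs(os.path.dirname(audio_file), exist_ok=True)
    subprocess.run(["ffmpeg", "-i", video_file, "-vn", audio_file], check=True)

for stem in sorted(missing_tracks(VIDEO_ROOT, MUSIC_ROOT)):
    extract_audio(os.path.join(VIDEO_ROOT, stem + ".mp4"),
                  os.path.join(MUSIC_ROOT, stem + ".mp3"))
```

The directory comparison is the part that is easy to pin down now; the ffmpeg invocation is the bit that will need the per-format tweaking.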

Monday 21 January 2019

Python Scripting [1C]: Scripting to copy map aerial layers 3

So I spent a bit more time completing the script, which now looks like this:

import glob
import os
import shutil
import xml.etree.ElementTree as ET
tree = ET.parse('/home/patrick/Sources/CopyFiles/Source/layers.qlr')
root = tree.getroot()
for layer in root.iter('layer-tree-layer'):
    sourcename = layer.get('name')
    print("Layer: " + sourcename)
    sourcepath = "/home/patrick/Sources/CopyFiles/Source/" + sourcename + ".*"
    #print(sourcepath)
    for file in glob.glob(sourcepath):
        #print(file)
        destname = os.path.basename(file)
        #print(destname)
        destpath = "/home/patrick/Sources/CopyFiles/Dest/" + destname
        #print(destpath)
        print("Copying " + file + " to " + destpath)
        shutil.copyfile(file, destpath)


There are a few commented-out lines, which we can ignore; the key parts are that it:
  • Imports the contents of a qlr file as XML and parses it
  • Gets the layer names from the XML and makes up a wildcard file spec to copy
  • Copies the files from the source to the destination.
This is saved on serverpc in the CopyFiles directory and is run in a shell window with the command
python copyfiles.py
And it works exactly as expected.

This is going to be much faster than the previous process because we don't need to start the Windows computer up and run the script in a remote window, and we also don't need to extract layer names from the qlr file we get from Qgis by hand, because the script can do that as well.

So it's all go and once again I am amazed how simple and straightforward this has been. 

I have also created a second version of the script called movefiles.py, that is the same except it uses the shutil.move function to move the file instead of copying it. This is especially useful where you need to split up an existing directory's contents.
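The behaviour of that one changed call is worth noting; here is a standalone illustration of shutil.move using temporary paths (hypothetical for the demonstration, not the script's real ones):

```python
import os
import shutil
import tempfile

# Hypothetical source and destination directories for the demonstration.
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()

src = os.path.join(src_dir, "layer.jpg")
open(src, "w").close()

# Unlike shutil.copyfile, shutil.move removes the source afterwards;
# across filesystems it falls back to a copy followed by a delete.
dest = shutil.move(src, os.path.join(dst_dir, "layer.jpg"))
```

That fallback behaviour is what makes it safe to use even when Source and Dest are on different drives.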
 
       
       

Python Scripting [1B]: Scripting to copy map aerial layers 2

So as I said at the end of the last post, I am learning to script in Python, which is turning out to be breathtakingly simple and easy to do.

I remember that I was going to do this script originally in Python then ended up doing it in Powershell. Why I did that at the time, I really do not remember. I must have got my head messed up somehow trying to work out how to get started in Python because playing with simple scripts has been extremely easy to achieve.

So for today, with only a couple of hours' work, here is a simple sample script that gets the data we are interested in directly from the QLR layer definition file that Qgis produces, containing the list of layers we want to copy.

import xml.etree.ElementTree as ET
tree = ET.parse('/mnt/share/serverpc/sources/CopyFiles/Source/layers.qlr')
root = tree.getroot()
for layername in root.iter('layer-tree-layer'):
    print(layername.attrib)

So as you can see, it hard-codes the file's name and location. Provided I remember to use that file name each time, it will be in that directory path, which is also where the layers themselves are located. The code is basically invoking an XML parser library, which it then uses to get what we need out of the source file.
The output from running that with our sample qlr file looks like this:

{'source': './9311M-92N9W.jpg', 'checked': 'Qt::Checked', 'name': '9311M-92N9W', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311M_92N9W_a169c725_6030_42d2_a1b5_89fe3fb34a28'}
{'source': './9311M-92N9X.jpg', 'checked': 'Qt::Checked', 'name': '9311M-92N9X', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311M_92N9X_544c6998_9038_4909_87c3_32e2870074b9'}
{'source': './9311M-92N9Y.jpg', 'checked': 'Qt::Checked', 'name': '9311M-92N9Y', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311M_92N9Y_7ae2ee80_c3e0_46a2_8e12_be72776f96a9'}
{'source': './9311M-92N9Z.jpg', 'checked': 'Qt::Checked', 'name': '9311M-92N9Z', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311M_92N9Z_f1e9d09b_91f2_4bb7_b928_68a290ac9331'}
{'source': './9311N-92N9W.jpg', 'checked': 'Qt::Checked', 'name': '9311N-92N9W', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311N_92N9W_ebf8959b_6048_4436_8dce_c21c2332f286'}
{'source': './9311N-92N9X.jpg', 'checked': 'Qt::Checked', 'name': '9311N-92N9X', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311N_92N9X_87f0e414_c64b_4952_81e7_66b2f723d34e'}
{'source': './9311N-92N9Y.jpg', 'checked': 'Qt::Checked', 'name': '9311N-92N9Y', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311N_92N9Y_c0cc8080_10e0_4868_88b0_bb10baf9de58'}
{'source': './9311N-92N9Z.jpg', 'checked': 'Qt::Checked', 'name': '9311N-92N9Z', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311N_92N9Z_9d9ae036_9ae9_4432_9a56_ecdc82775d5c'}
{'source': './9311N-92NB0.jpg', 'checked': 'Qt::Checked', 'name': '9311N-92NB0', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311N_92NB0_fdc6a14e_e0c2_4e53_ad39_b564354a5a30'}
{'source': './9311N-92NB1.jpg', 'checked': 'Qt::Checked', 'name': '9311N-92NB1', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311N_92NB1_4ab35f8d_caa2_4f1a_a36e_ae78b30bcf5c'}
{'source': './9311N-92NB2.jpg', 'checked': 'Qt::Checked', 'name': '9311N-92NB2', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311N_92NB2_196987aa_ec76_4346_b896_b808b2cc5417'}
{'source': './9311N-92NB3.jpg', 'checked': 'Qt::Checked', 'name': '9311N-92NB3', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311N_92NB3_bbf06581_e340_4d92_bed8_638bd2d18a3d'}
{'source': './9311P-92N9V.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92N9V', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92N9V_91859b63_5399_47c1_9e33_1f3a5ea3c7bc'}
{'source': './9311P-92N9W.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92N9W', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92N9W_2db6b1cd_ad46_4401_b832_3a03fbc3fb8b'}
{'source': './9311P-92N9X.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92N9X', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92N9X_09a07b9e_dad9_4843_8a3c_8af6f4b68893'}
{'source': './9311P-92N9Y.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92N9Y', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92N9Y_bc4b2eb8_be9a_4b22_9655_b77b205ad15d'}
{'source': './9311P-92N9Z.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92N9Z', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92N9Z_4b00ea57_4ac3_455f_a32a_fee8866cfc88'}
{'source': './9311P-92NB0.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92NB0', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92NB0_aef476ee_7ddd_4334_b5f3_66265650fde9'}
{'source': './9311P-92NB1.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92NB1', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92NB1_718aabaa_2d59_4dbc_a709_64b05852e0a6'}
{'source': './9311P-92NB2.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92NB2', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92NB2_0cb19185_927e_48ae_81f1_043db7f5ed06'}
{'source': './9311P-92NB3.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92NB3', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92NB3_227e337c_2dbf_4e37_be7c_0c1381e6c2df'}
{'source': './9311P-92NB5.jpg', 'checked': 'Qt::Checked', 'name': '9311P-92NB5', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311P_92NB5_6c0bddd0_b494_460a_bc52_afc923116898'}
{'source': './9311Q-92N9V.jpg', 'checked': 'Qt::Checked', 'name': '9311Q-92N9V', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311Q_92N9V_c08280e1_597b_4509_aff4_4e78278b1e22'}
{'source': './9311Q-92NB1.jpg', 'checked': 'Qt::Checked', 'name': '9311Q-92NB1', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311Q_92NB1_e6b90990_75e5_4c67_a74b_4fe86f4d4129'}
{'source': './9311Q-92NB2.jpg', 'checked': 'Qt::Checked', 'name': '9311Q-92NB2', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311Q_92NB2_8c4891e9_990e_4180_80c6_a66bb0edb97e'}
{'source': './9311Q-92NB3.jpg', 'checked': 'Qt::Checked', 'name': '9311Q-92NB3', 'providerKey': 'gdal', 'expanded': '1', 'id': '9311Q_92NB3_4c2b5052_8468_483b_8116_249ac7fb82d9'}

So we are interested in the 'name' attribute of the layer-tree-layer element, which gives us the base filename to copy (as a wildcard, since each layer has several files sharing that base name). So as you can imagine it will be a fairly short step from here to actually copying the files.

This is where I have to leave things for today. Tomorrow I might be having a day off, and not coming back to this for a couple of days, so NZ Rail Maps will return after a short interlude.


Python Scripting [1A]: Scripting to copy map aerial layers 1

In my arsenal of computers I have one low-spec machine that still runs Windows. Even with all the software I have for Linux, there are still a handful of tasks that use Windows-only software, although this has diminished to the point that this computer often doesn't get turned on for weeks at a time. The reason it has been on a lot lately is solely to copy the map aerial layers. The way this works is simple. Because there is so much aerial photography to download (what I have so far, for maybe a quarter of the maps, amounts to hundreds of gigs), I have to sift through it and pick out just the layers I need; that list of layers gets saved into a file. The script then reads the layer list and copies just the layers needed, and they get put into the maps project in Qgis.

The Powershell script is very simple because it just works with a text list of layer names, which are assumed to be in the hard coded directory path, and copies them to another hard coded directory path. The actual paths are network paths from the shares that the Windows computer has access to, which are hosted on one of the Linux computers. The actual script itself is stored on the same network share. Here is the script:

$layers = Get-Content "S:\CopyFiles\CopyFiles.txt"
ForEach($layer in $layers)
{
    Write-Host $layer
    $source = "S:\CopyFiles\Source\" + $layer + ".*"
    Write-Host $source
    Copy-Item -Path $source -Destination "S:\CopyFiles\Dest\"
}

Basically the first line loads the layer list from a hardcoded file reference. The rest of the script simply echoes each layer name to the screen, assembles the full source file path, and then performs the copy operation.

I did look at extracting layer names from the source XML file, but ended up performing this step manually each time. It will be looked at again with the new script, which is expected to be in Python, the key issue being the ability to run it easily from a command line in Linux.

So, a little progress to report. After playing about with IDEs, I have naturally settled on KDevelop, which of course is the native development environment for KDE, since I already run KDE on most of my Linux computers. I am using a tutorial from the website below to build my knowledge for developing this script. It is a pretty simple scripting task, and there is enough capability built into Python to handle the XML extraction this time; the result is a more sophisticated script that can automate more of the required task. So I am taking a break from the maps for the next day or two while I build and test this script, which will speed up and simplify the process.

Sunday 20 January 2019

Scanning with Linux [2]: Epson Perfection V200 Photo scanner on Debian

Last time I wrote about scanning I had done the installation on mainpc. This new post comes about because of reinstalling on mediapc and hitting a few hiccups.

The main thing I am writing about is that Epson now gives you the wrong download package for the Perfection V200 Photo scanner for Linux. If you search their support page for V200 it comes up with Perfection V200 and gives you a page listing a pile of download packages for Windows, Linux and Mac. The problem is that a plugin component needed to communicate with the scanner is missing from that download.

The reason I was able to install it the first time on mainpc was that I had downloaded VueScan, which gives a link to the correct package, found under the alternative name for the scanner, GT-F670. For some strange reason known only to Epson, they have a completely separate download page for GT-F670 with its own list of packages, which includes the iscan-gt-f670-bundle-1.0.1.x64.deb.tar.gz that the scanner needs. Installing that package instead of the generic one from the V200 download page solved the problem.

A support query has been lodged with Epson to see if they can fix this, but in the meantime this is how to address the problem. It may be just as applicable to other Epson models, because I have found queries on the internet from other people with a similar experience on other models; they too found the missing plugin package was what they needed, and had to find out where to download it from.

Wednesday 2 January 2019

Life with KDE [6]: Clean install serverpc/mediapc

As I wrote in my last post, I am working to reinstall both mediapc and serverpc from scratch. In both cases I was hoping to resolve hibernation problems. Both computers now have new Gigabyte GA-B250M-D3H boards in them, so the outcomes should be identical.

However, on resuming serverpc, which has just been reinstalled with KDE over Debian 9.6, I have found that plasmashell crashes at logon, which means no user shell. plasmashell crashes are quite common and have a number of possible causes. I could spend some time trying to debug it or reporting it as a bug, but so many of these bugs are not resolved very rapidly that not using hibernation is the obvious workaround.

So today I am reinstalling serverpc, which will end up much the same as before. As usual, reusing my existing logon, thanks to having /home on a separate disk volume, saves a lot of time and customisation, but there are still a few hours of work reinstalling everything. When I remember, I try to back up some of the system folders, but of course I forgot this time, so I have a harder job. On this blog I use the InstallMyPC tag for articles that give me steps to follow to reinstall the computers. Apart from those articles, there are many other tasks I am proficient enough at that I don't need to look up any documentation, such as installing Samba and sharing the network drives on each computer.

These days I am more proficient with Linux and see fewer issues on my computers, which in turn means less time spent reinstalling them. I have also found Debian to be the best distro overall, so I am spending less time installing different distros, as I did when I was playing with different shells and Xubuntu and the like. My first try with KDE was a disaster because I found it so hard to adapt to, so I went back to XFCE at that point. But having made the effort a second time with KDE, I am going to stick with it. One computer will keep XFCE, but everything else will run KDE now.

serverpc has the flatpak version of Gimp, as do my other computers. I expect to also use the flatpak version of Qgis, as it is easier to install, even though it is an experimental option for the Qgis community; that means I won't be running the development edition on serverpc now. I just need a reliable second option for the current version, since mainpc started unexpectedly crashing on one particular project, and I found serverpc with the same version of Qgis was able to edit the project without any trouble. mediapc will still be installed with the development edition, but I am not doing any serious work with the Qgis development editions: it is too hard to run something that uses incompatible project files, because when it crashes you have to fall back to stable, which can't read the project files from the development edition.

Computing resources optimisation [2G]: Reinstall serverpc and mediapc

As we know, when I upgraded everything last month I just put the disks back in with the new boards, booted everything up, and it all ran. I didn't reinstall either of the two new machines I currently have running, serverpc and mediapc.

However as neither of these computers is able to hibernate, I have started reinstalling serverpc to see if I can get it to hibernate, and if this works, mediapc, which for some reason won't hibernate at the moment, will be reinstalled as well.

Both will be installed with KDE on Debian 9.6, and reinstallation is not a really big job, as neither of them has a lot of software installed; those two are mainly used for specialised tasks.

It has also been a useful opportunity to give serverpc access to all of its SSD swap space, which is over 100 GB and will come in useful for really big jobs that exceed the 32 GB of memory it has installed.

Hibernation in Debian seems to be quite reliable and is enabled by default, which is one of my reasons for choosing it over Ubuntu and its variants, where hibernation is disabled by default. However there seem to be more issues with hibernation not working well when KDE is the desktop environment, compared to my experience with XFCE.

I would have reinstalled mediapc first, since it doesn't have KDE at the moment, but as it is doing the backups right now it is easier to do serverpc first and mediapc later. I decided to leave serverpc's name as serverpc, and mediapc will keep its name as well.

It is good that I don't need to enable UEFI on the two new boards; after my negative experience with UEFI on Linux on the old serverpc, I am sticking with BIOS for everything for the time being, the exception being the NUC, which remains UEFI.

The first attempt to hibernate serverpc was a failure, with the desktop environment not loading properly on resume, so I flashed the BIOS with the latest version. This had the useful side effect of getting rid of all the ACPI error messages at boot. It's interesting that the BIOS was already at the second-to-latest version (Gigabyte F9), and going to the latest (F10) got rid of all those ACPI errors. There have been about six revisions of the BIOS for this board, and I was fully expecting quite an old version on a two-year-old board, as it is easy for the manufacturer not to bother updating the firmware at the factory and to leave it to the user to fix.

To finish reinstalling serverpc I need to re-enable the root logon (which I haven't got around to doing on mainpc) to get the RAID array online with my user profile, so that is one step still to be completed. However, while resume from hibernation does seem to restore application data, the KDE desktop environment often does not reload properly: instead of the menus and so on, all you get is a black screen. This is obviously very odd and something I need to investigate further as regards KDE. It does seem to be a common issue, though, so maybe it will be easy to resolve.

Tuesday 1 January 2019

Firefox Multi Account Containers vs User Profiles [3]

After a bit of playing with a few settings, I am returning to my original plan of multiple FF profiles to get container-style isolation. The problem with containers is when you do need tracking information to be exchanged between Facebook and another site. My Wordpress blogs are set to auto-publish to a Facebook page, and without being able to selectively disable the containers they cannot authenticate with Facebook. I have found that the settings keep getting lost or turned off, probably because the Facebook cookies are being deleted or isolated in the Facebook container, which the Wordpress tab can't access.

Firefox has always supported multiple user profiles. There is a built-in profile manager, which you can bring up by starting Firefox with the -P switch. This gives you a list of current profiles and the ability to create, delete or rename profiles, and to select a default. In this case, since I have both standard Firefox and Quantum, there are profiles for both editions (Quantum's profile is called dev-edition-default).

After creating profiles via this screen, make sure the default one is selected as the one FF will start with when it opens normally, because the other profiles will be started with custom shortcuts (and you'll probably want custom themes, home tabs and bookmarks in each so you can tell them apart, too).

To start a browser with a specific profile, you need the correct profile name to pass into the shortcut. You can type "about:profiles" in the address bar to get a list of the profiles the browser knows about. However, the list is only updated when the browser starts, which means you may have to close all windows of that browser to get it to reload. In this case I had created my profile list with regular Firefox, and because Quantum windows were still open I had to close them all to get Quantum to show me the profile list.

The command to start Firefox with a specific profile is firefox -P "ProfileName", and it opens that specific profile. Plain old FF with no parameters keeps using the default. Quantum has some way of seeing there is a dev-edition-default profile and using that instead of the one regular FF uses, which I think is the result of a setting in Quantum's preferences that tells it to start with that specific path. If you are using Quantum alongside Firefox, the command to start Quantum will be different. I have installed Quantum on some of my computers inside my home directory, instead of a system path, so that I don't have to reinstall it each time I reinstall the system. This also reflects the fact that we install Quantum from a TGZ file we download, rather than as a DEB package from our distro's repositories. It also ensures that Quantum can update itself under our user account (i.e. with the right permissions) when Mozilla tells the browser to update; if it were installed in a system path such as /usr, only root would have rights to that path.

So in this case, to ensure I am starting Quantum, I need to pass the whole path in the shortcut. With the command line as above it indeed starts up Quantum with a completely fresh window that has nothing: no shortcuts, theme or extensions. I can now customise that profile independently of any other. What I would really like is a custom colour, or maybe a changed string in the title bar, and that I will look at next time. But for now I don't need the multi containers, because that window is only being used for that one blog's Wordpress account.

So that is how it will happen from here. The next step is to add a custom shortcut in KDE Menu Editor, and I will create a new submenu for browser profiles as well. I have been setting these profiles and shortcuts up, and creating icons for them out of the profile pictures I use in FB, so it is going pretty well. I still need to install the Tab Session Manager extension into each profile to save the tabs, and the NZ Rail Maps one will need the uget extension as well to handle the downloads. And so it goes.