Thursday 31 May 2018

OpenID sidelined for FB/Google identity lock-in

OpenID is a great concept: a single identity that works across a range of Internet sites. However, it has been falling out of favour in the past few years because both Facebook and Google have provided their own replacements for it. The stated reason for these is user convenience, but that convenience comes at a cost. It enables Facebook or Google to collect more information about you for their commercial purposes. It also hands information about you to the third party website, and in some cases a rogue site can use it to post to your social media account without your permission.

Facebook's site originally supported OpenID, but some time ago they replaced it with Facebook Connect, the API that allows you to sign in with Facebook credentials on third party sites. This suits Facebook because it drives people to get a Facebook account that they can also use to log into other sites. However, people have found they are giving away too much information, and in some cases security privileges, to the sites that receive the OAuth token from Facebook.

I wrote this post when I learned that Blogger had dropped support for OpenID. I have never used OpenID for anything at all, but now there won't be much point in using it if all these sites are going to drop it.

The primary concern I have with using Facebook or Google credentials to log into a site is that it raises the same concerns as using a single username and password across a lot of sites. Most of the time this will not be an issue, but if there is a problem with your Facebook or Google login, you may find that your access to third party sites is restricted or blocked.

My own experience is of misuse of information released from a social media profile. I remember signing up to the Stuff website with my Facebook credentials. Nek minnit the Stuff website was using my Facebook profile photo and some other details that I didn't want appearing on my comments. At that point I deleted the Stuff profile and created a new one using a local login.

About the only good thing about using FB or Google credentials to log in is that their security is very good, so there is less likelihood of your username or password being stolen.

Saturday 19 May 2018

Viewnior, Manjaro, Lubuntu, LXQt

Viewnior is another image viewer that is included in most Linux distros. It is significantly better than Eye of Gnome (EOG) because it can handle very large images. As such I expect Viewnior will become the default image viewer on all of my Linux computers.

I got to hear about Viewnior because I recently built a VM using Manjaro Linux. This distro is derived from Arch Linux, but compared to my ultimately abandoned attempt to build an Arch VM from the core distro files (a complex task, since Arch lacks a built-in installer), Manjaro has been very easy to set up using the XFCE edition. As this edition of Manjaro includes QGIS 3.0.2, I will be using this VM to run QGIS 3 on mainpc.

Since Lubuntu 18.04 was released I have installed it on two computers: the NUC in my bedroom, and a Toshiba R700 laptop that I have. On both of them I installed LXQt and sddm on top of Lubuntu, since the Lubuntu community doesn't yet have a version of LXQt they are willing to release for production. When you install sddm on top of Lubuntu you get the KDE Plasma login screen thrown in.

I have thought about putting LXQt onto some of my other computers, but it is only up to release 0.12, still has a long way to go in terms of usability, and development seems to be very slow. For now I am just leaving it on some VMs and the Lubuntu computers mentioned above. However, to make serverpc look nicer I have installed sddm with the Plasma Breeze theme and put some new icon themes into XFCE to make it look cooler.

Friday 11 May 2018

Canon EOS M100 evaluation of low light performance

One area of particular technical superiority of large sensor cameras (DSLRs and MILCs) is that their larger sensors and lenses let more light in, and the result is much better low light performance than compact cameras. With this in mind I set about evaluating the low-light performance of the EOS M100 using trains as subjects. All the photos were taken at Main South Line Bridge 7 at Opawa.

There are a few tricks to be aware of when taking photos in low light conditions, such as:
  • Consider whether you need a tripod to hold the camera steady.
  • Ensure you select manual exposure to balance the shutter speed and aperture and to reduce the amount of processing the camera has to do.
  • Ensure you select manual focus and prefocus the lens on the target. This is particularly important since many cameras will have difficulty autofocusing in low light.
  • Consider what you expect from a photo taken under low light. If the target is moving, it is likely to be blurred.
The EOS M100 can go to ISO 25,600, which is two stops better than the EOS 600D that I also have, whose maximum ISO setting is 6400. In this case I set the maximum ISO for the camera to use on auto ISO to 6400 for the first tests and 12,800 for the rest. When shooting video the EOS M100 can only go up to ISO 6400.

The camera is normally set to the AF+MF setting, where if the autofocus fails to lock you can just turn the focus ring to set focus manually, and the camera automatically magnifies the centre of the picture. The problem with this is the focus only holds as long as you keep the shutter button half depressed. I had two occasions where the manually set focus was lost and the camera refused to release the shutter. When I set the camera to MF I was able to set and leave the focus without having to hold the shutter halfway.

The one video I shot came out jerky, and I will need to do more work to discover whether this is down to camera settings or something specific to low light.

A selection of still shots is below:

Sunday 6 May 2018

Free Linux video editors

Well, when I rip one of my DVDs the music all comes out on my computer as one file, so I need a simple editor to split that file into tracks.

Avidemux is usually good, except that this time I have a file that is 1 hour 50 seconds long and for some reason the program says it is only 36 minutes long, so I was not going to be able to work on all the tracks. So I had a look at some other programs. VidCutter looked good, but it was useless when it came to saving the clips because they all seemed to save to one file and it wouldn't save anything. LosslessCut looked good too but couldn't cope with the size of the track. I also have Pitivi but haven't tried it this time; in the past it wasn't very stable.

So I ended up just using VidCutter to mark the start and end points of each track, then I worked out how to feed the numbers into the ffmpeg command line to extract and convert the video. When you look at this you realise Avidemux is essentially a front end for the same FFmpeg libraries.

Here is a typical command line for one of the tracks:

  • ffmpeg -ss '00:19:15.888' -t '00:05:36.035' -i VTS_02_1.VOB -b 2M -acodec mp3 -vcodec mpeg4 afl05.mkv

    The parameters are:
    • -ss tells it to seek to a position on the input track
    • -t tells it the duration you want to extract
    • -i tells it the input file name
    • -b tells it the bit rate to use
    • -acodec is the audio codec to convert to, in this case MP3
    • -vcodec is the video codec to convert to, MPEG-4 in this case
    • the last parameter is the destination file name; from the .mkv extension ffmpeg works out that it should package the clip into a Matroska container.
So I was able to extract all of the clips the same way and everything came out well.
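
Since every track uses the same ffmpeg settings and only the start time, duration and output name change, the whole job can be scripted. Here is a minimal sketch of that idea (the second track's times and the file names are just placeholders, not the real values from my DVD):

  #!/bin/bash
  # Extract several tracks from the same VOB with identical encoding settings.
  # Each entry is "start|duration|output file".
  tracks=(
    "00:19:15.888|00:05:36.035|afl05.mkv"
    "00:24:52.000|00:04:10.000|afl06.mkv"
  )
  for t in "${tracks[@]}"; do
    IFS='|' read -r start dur out <<< "$t"
    # Same options as the command above; -b:v makes it explicit that the
    # 2M bit rate applies to the video stream.
    ffmpeg -ss "$start" -t "$dur" -i VTS_02_1.VOB \
           -b:v 2M -acodec mp3 -vcodec mpeg4 "$out"
  done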

Friday 4 May 2018

Lubuntu LXQt still not ready for release, but can be manually installed

As is generally known, the Lubuntu community is working towards migrating their platform to LXQt. I have been testing a whole lot of VMs running LXQt as a desktop environment and it looks good, even if it is incomplete.

I have run it on my bedroom computer using a test install, but there are issues with the wireless drivers which I am unable to make work. It may be a question of trying to fix these up, or of moving to the stable release of Lubuntu 18.04 until such time as I can install LXQt on top of that; at first it seemed the packages were not available to work on top of Lubuntu 18.04.

Failing that I may have to go back to Xubuntu and see if I can put LXQt on top of it, as that would be nearly the same and should work well. The issue with Lubuntu is that while they have been working on the new LXQt-based release, they have pushed it back yet again.

In the end I decided to reinstall with Lubuntu and then add LXQt from the package manager. To get it working I also had to separately install the sddm display manager. This resulted in a Plasma login screen, but set to log in to the Lubuntu QT session. At the moment there is an issue with PulseAudio, but otherwise it seems to work very well, and the wireless even worked during installation. So it looks like, apart from the odd glitch, I have it working. The Plasma login screen, which is really nice looking, is an added bonus.
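
For reference, the manual install amounted to something like the following (a rough sketch from memory, assuming the lxqt metapackage and sddm from the standard Ubuntu 18.04 repositories):
  • sudo apt update
  • sudo apt install lxqt sddm
  • When prompted, choose sddm as the display manager, then log out and pick the LXQt (Lubuntu QT) session on the login screen.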

The audio indicator problem, which also apparently stops the volume up and down keys on the keyboard from working, is not a big deal in this configuration because the hardware volume control is easily accessible. Perhaps it is really an issue with LXQt and will be fixed in a future release. The wireless driver was a much bigger concern, so to find that it is properly addressed and works flawlessly matters far more, and I am pleased that is resolved. There may also be a way I can get Kodi's own volume controls to work with the keyboard in any case.

After doing some more work on this I uninstalled PulseAudio and am now using the ALSA audio controls, which work perfectly: the volume control buttons on the keyboard work properly, and the master volume control for the system can be set to full, which produces much more output from a quiet audio source. So that is what I have been able to make work with LXQt, and everything is great now.
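
The change itself was only a couple of commands, roughly as follows (mixer control names vary between sound cards, so treat these as a sketch):
  • sudo apt remove pulseaudio
  • alsamixer (check that the Master and, if present, PCM channels are unmuted and at the level you want)
  • amixer set Master 100% (the keyboard volume keys then drive this control directly)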

Tuesday 1 May 2018

Backup setup commands (btrfs, rdiff-backup, ssh)

Once btrfs-tools is installed these are the commands to set up the whole disk as a single btrfs volume and add a filesystem to it. The first step is to wipe the existing partition table with wipefs, then use mkfs.btrfs to create the filesystem on the whole device. This is the best way to create a backup disk, even if it is used for more than one PC: a single volume shared by several backup repositories doesn't waste space the way a separate partition for each repo would.
  • wipefs -a <device>
  • mkfs.btrfs -L <label> <device>
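As a concrete example, run as root, if the removable disk shows up as /dev/sdb (a hypothetical device name; check with lsblk first, because wipefs is destructive) and you want to label it backup1:
  • lsblk
  • wipefs -a /dev/sdb
  • mkfs.btrfs -L backup1 /dev/sdb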
After creating the filesystem, use blkid to find out the UUID of the new filesystem and add a line like this to /etc/fstab 
  • UUID=... /local/mount/path btrfs compress,noauto 0 2
I use paths off /mnt/media to mount my removable disks. If you want to back up as something other than root then be sure to give permissions to the user account concerned; chown is one simple way of doing this.
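
For example, to hand the mounted volume over to a non-root account (the user name and mount path here are only illustrative):
  • chown -R backupuser:backupuser /mnt/media/backup1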

With the above line in fstab the volume will not automount, which is handy for a removable disk that might not be present at startup. Just type mount /local/mount/path to mount the disk when it is inserted. The compress option ensures files are compressed when they are stored or updated, but it leaves the choice of compression algorithm to the btrfs default.
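
If you would rather pin the compression algorithm than take the default, the fstab entry can name it explicitly; for example with lzo (zstd is another option on kernels from 4.14 onwards), using an illustrative mount path:
  • UUID=... /mnt/media/backup1 btrfs compress=lzo,noauto 0 2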


The next step to cover is how to set up ssh for passwordless connections to the remote computers. For my setup, serverpc is the computer that hosts every backup. When I need to change disks, I can quickly shut down serverpc and then bring it back up again (the disks are supposed to hot swap, but they seem to fail quite quickly, so I have taken to cold swapping them).

This means serverpc needs the openssh-client package installed and each target computer (mainpc and mediapc) needs openssh-server installed.

My current backup scheme assigns a pair of 2 TB disks to back up mainpc, and a combination of a 2 TB disk and two 1 TB disks to back up mediapc and serverpc, so there are two sets of disks for each. And that's without making use of rdiff-backup's incremental capability to keep multiple generations per disk.
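
For what it's worth, using those extra generations is straightforward once a repository exists. From the backup directory on the mounted disk, something like this lists the increments and restores a file as it was two weeks ago, should that ever be needed (the file names here are only illustrative):
  • rdiff-backup --list-increments .
  • rdiff-backup -r 2W ./somefile /tmp/somefile-two-weeks-ago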

Using the instructions from my previous post, I need to follow these steps after installing the respective packages on each computer (a one-command shortcut is noted after the list).

  • On serverpc create the key pair for SSH:
    • ssh-keygen -t rsa
    • The key will be saved in .ssh/id_rsa under your home directory (accept the default location)
    • Press enter to put in an empty passphrase when prompted (twice)
  • Use serverpc to ssh into the target (mainpc in this example) and create the directory there for the key to be copied to.
    • ssh user@a.b.c.d mkdir -p .ssh
    • Enter yes when asked to continue connecting
    • Enter the password for that user on the remote system when prompted
  • Copy the public key from serverpc to target
    • cat .ssh/id_rsa.pub | ssh user@a.b.c.d 'cat >> .ssh/authorized_keys'
    • Enter the password for the user on the remote system when prompted
  • Ensure the correct permissions are set on the remote filesystem
    • ssh user@a.b.c.d "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
    • Enter the password for the user on the remote system when prompted
  • Test the login to see that no password is needed
    • ssh user@a.b.c.d
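
As a shortcut, the middle three steps can usually be replaced by a single command, ssh-copy-id, which creates the remote .ssh directory, appends the public key and fixes the permissions in one go:
  • ssh-copy-id user@a.b.c.d
    • Enter the password for the user on the remote system when prompted, then test the login as above.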

Finally I can do my backups with rdiff-backup. All computers need to have the rdiff-backup package installed, preferably at the same version.

Usually what I do is mount the disk and then change into the directory I want the backup to go into. Then I can use these types of commands to do the backup:

  • To backup one of the remote systems using SSH to connect to that system
    • rdiff-backup -v5 user@a.b.c.d::/path/to/files .
    • This is telling rdiff-backup to use verbosity level 5, and the files will be backed up to the current directory, assuming I changed to that directory as mentioned above.
  • To backup serverpc's local files:
    • rdiff-backup -v5 /path/to/files .
    • Again we assume the current directory is the backup path.
rdiff-backup has generally been good to use, but there has been one instance (on the PC doing the backups) where for some reason the backup disk was not mounted properly. We mount disks to a path off /mnt, which lives on the installation volume. The problem is that when no mount exists, the path just looks like an ordinary directory, and anything copied there fills up the install disk instead of the disk that is supposed to be mounted. As I couldn't identify which files to delete from the path, I was forced to reinstall the OS.
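
A simple guard against this would be to check that the backup path really is a mount point before rdiff-backup writes anything. A minimal sketch of a wrapper script (the mount path and subdirectory are only illustrative):

  #!/bin/bash
  # Refuse to run the backup unless the removable disk is actually mounted,
  # so that a missing mount cannot silently fill up the root filesystem.
  BACKUP_DIR=/mnt/media/backup1
  mount "$BACKUP_DIR" 2>/dev/null   # uses the fstab entry; no effect if already mounted
  if ! mountpoint -q "$BACKUP_DIR"; then
      echo "Backup disk is not mounted at $BACKUP_DIR, aborting" >&2
      exit 1
  fi
  cd "$BACKUP_DIR/mainpc" || exit 1
  rdiff-backup -v5 user@a.b.c.d::/path/to/files .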

As a precaution, when I reinstalled Debian, I set up a separate partition for /mnt on the backup host computer, with a size of only 100 MB, so that a backup run against an unmounted path fills up almost immediately and fails instead of silently filling the root filesystem.