Monday 31 December 2018

Firefox Multi Account Containers and Equivalents [2]

OK I have managed to simplify this from yesterday.

I now expect to have just a single user profile in FF and do the following:
  • Window 1 will be the one on the main screen of my PC and it will be the one where I am mainly using MACs. The MACs will be geared around Google, WordPress and Disqus, which are the services where I mostly have multiple accounts.
  • Window 2 will be on the second display and it will use MACs specific to NZ Rail Maps. Currently I use Opera to access this stuff but I need to use FF for this.
  • Both Window 1 and Window 2 are Firefox Developer Edition (aka Firefox Quantum).
  • Window 3 is specific to certain accounts and purposes and won't need to have the MAC extension installed. It will maintain separation by using the standard version of Firefox, which has its own user profile separate from FFQ.
So that makes it relatively easy to set up the MACs and use them in FFQ. I want to use Firefox for everything now, especially the things I used to use Chrome and Opera for in the past. Both are still installed but won't need to be used.

Computing resources optimisation [2F]: Managing Disk Space Usage

This computer upgrade project is close to finished. I do have to complete mounting the fourth computer, which is inching along at a glacial pace at the moment, plus various small tasks. One of the important ones is that mainpc's disks, which have a total capacity of 1.8 TiB (2 TB), are nearly full. The main increase in capacity use is to do with mapping: to some extent the Maps folder, and to a lesser extent the Oracle virtual machines that have been used to run different software versions. Each time I download more aerial photography for maps it takes up additional space. I am satisfied the usage is appropriate, so I now have to look at whether other resources can be released to make more space for various uses.

In the past I did look at moving some stuff like photos or other media to mediapc. I am not sure this is going to be easy or even possible, because right now mediapc's disks are also nearly full and there doesn't seem to be an easy way of freeing up capacity. It will probably be a case of cleaning up a few folders here and there on mainpc, especially where they are duplicated on mediapc. The Downloads and DownloadArchive folders on mainpc together total more than 300 GB, and working through them, however tedious or slow, is going to be worthwhile. At the very least there are large volumes of LDS downloads that can be cleaned up. It shouldn't be too hard to free up at least 100 GB relatively quickly and take some of the pressure off the 132 GB I have free at the moment.
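Before deleting anything, a quick du/sort pass shows where the space is actually going. This is a generic sketch; the paths are examples standing in for the real Downloads and DownloadArchive locations:

```shell
# Summarise the biggest subfolders of the download areas, largest first.
# ~/Downloads and ~/DownloadArchive are example paths.
du -h --max-depth=1 ~/Downloads ~/DownloadArchive 2>/dev/null | sort -rh | head -n 20

# Or list the 20 largest individual files under one folder:
find ~/Downloads -type f -printf '%s\t%p\n' 2>/dev/null | sort -rn | head -n 20
```

Working from the top of that list usually frees the first 100 GB quickly.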

serverpc is the other major disk space gobbler, with the aerial mosaics using nearly half the total disk space. With more and more mosaics being put together as the maps progress, I have to take a hard look at whether I need to keep so many files. Cleaning up downloads will also yield space savings on that computer. With it now having 32 GB of RAM there is less demand for swap space than in the past, and it does have the SSD available for swap, but it is still currently down to less than 100 GB of free space. It is actually so simple and quick to make mosaics that I don't really need to keep the last three files of each one; possibly I can cut back to just the most recent file.
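If the mosaic files follow a predictable naming pattern, keeping only the newest of each can be scripted. This is a sketch with a hypothetical mosaic_*.tif pattern and folder; check the file list before letting anything like this delete for real:

```shell
# List one mosaic's files newest-first, then delete all but the first.
# ~/mosaics and mosaic_*.tif are hypothetical; adjust to the real layout.
# (Assumes no spaces/newlines in the filenames.)
ls -t ~/mosaics/mosaic_*.tif | tail -n +2 | xargs -r rm --
```

Run with `echo rm` first to preview what would go.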

Cleaning all these up will also speed up backups. With mediapc becoming the backup host, I have to get it set up to run some backups which are very overdue of late.

Well, as it turned out, I was able to free up 400 GB on mainpc with only a little effort, and can easily do the same on serverpc, but that will soon be gobbled up by more mosaics, so I will have to get creative about deciding how many mosaic files I need to keep on serverpc. The backups are underway again, but I am only keeping one generation of backup for serverpc and two each for the other two computers. It can be slow, as a full backup can take more than a day, which is why I have a dedicated computer, in this case mediapc, running the backup while I keep using the computers being backed up as normal. Once mediapc has finished the backup of mainpc it will be reinstalled with Debian 9.6 with KDE on top, as I only need one computer running XFCE and that will be playerpc. In the process mediapc gets its swap put back onto the full SSD; there have been issues in that it can't hibernate at the moment, and it needs that swap available if it is doing big tasks with only 8 GB of RAM. I am meanwhile pushing along with finishing the installation of playerpc under the desk.

Firefox Multi Account Containers and Equivalents

Since these days I use Firefox as my main browser (partly to give the finger to Google for the data-thieving Chrome browser), I need to look at how to access all my Gmail accounts in the same browser without having to log in and out of whichever account I am using at a particular time. The same goes for, say, WordPress (I now have four WordPress blogs and each has a different account associated with it), Instagram and some other services. At the moment one workaround is to use several different browsers at the same time: for example Firefox Developer, regular Firefox, Opera and Chrome.

Mozilla provides the Multi Account Containers extension to help you handle multiple accounts on the same website (for example, Google accounts) in the same browser window. You can set up a container group (for example Home, Work) and then add individual tabs to that group. Each group operates independently of the others.

As good as this is, I am still considering an alternative of setting up multiple Firefox user profiles and starting a different Firefox session on a different user profile. This would still require the container groups to be used, but each group type would be unique to that user profile and there would probably only need to be one container group type per user profile.
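For the multiple-profile approach, Firefox can create and launch named profiles from the command line; the profile names here are just examples:

```shell
# Create two named profiles (one-off):
firefox -CreateProfile home
firefox -CreateProfile railmaps

# Run each profile as its own independent session, side by side:
firefox -P home --no-remote &
firefox -P railmaps --no-remote &
```

Each session then keeps its own cookies, logins and extensions, so container groups only need to cover the accounts relevant to that profile.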

I have been thinking about this for months (MACs have been available for a while and were first installed on Firefox some time ago for me) but now perhaps is the time I will start testing in earnest to see if I can make these ideas work.

The multi-window thing mainly addresses the fact that I don't want a lot of tabs open in one window, as would be the case with MACs all opening in a single window. Being able to label the window and/or colour its title bar independently of other windows would also help. However, it could just be a lot simpler to classify everything using MAC container categories descriptive enough to cover what the different accounts are used for. I may still keep both standard Firefox and Developer in use with MACs.

Thursday 20 December 2018

Raspberry Pi [4]: Raspberry Pi vs Chromecast

I last wrote about my Raspberry Pi back in July, and since then it hasn't been used. I decided against using it to play videos because it had some trouble playing some of the ones in my collection. At this point I went back to playback from a computer, and subsequently pc4 was used for this role.

I am taking another look at the Pi as a media player for small group settings where you just need something that is small, compact, easy to carry around and quick to set up. At this point I am also making a comparison with a Chromecast.

Here is a quick comparison between the Pi and the Chromecast.


                    Raspberry Pi 3 Model B                   Chromecast (2016)
Description         Single board computer                    Streaming media player
SoC / CPU           Broadcom BCM2837 SoC, 1.2 GHz quad       Marvell Armada 1500 Mini Plus
                    ARM Cortex-A53                           88DE3006 SoC, 1.2 GHz dual
                                                             ARM Cortex-A7
GPU                 Broadcom VideoCore IV                    ?
RAM                 1 GB LPDDR2 (900 MHz)                    512 MB
Networking          10/100 Ethernet, 2.4 GHz 802.11n         802.11ac (2.4 GHz / 5 GHz)
Bluetooth           Bluetooth 4.1 Classic, Bluetooth LE      N/A
Storage             microSD                                  256 MB flash, app-based cloud
Ports               HDMI, 3.5 mm analogue audio-video        Fixed 0.1 m HDMI cable
                    jack, 4x USB 2.0, Ethernet, CSI, DSI
Power               micro-USB type B                         micro-USB type B
Operating system    Raspbian (Debian derivative) plus        Unknown proprietary
                    most Debian packages for ARM
Core accessories    Optional case, optional power supply     Included power supply

The Raspberry Pi has a higher hardware spec overall. The quad-core Cortex-A53 is more powerful than the dual-core Cortex-A7 in the Chromecast, and it is 64-bit capable (compared to 32-bit in the Chromecast). Power requirements for the Pi are greater if it needs to supply connected USB devices. The Pi's wifi only supports 2.4 GHz; the Chromecast is both 2.4 GHz and 5 GHz capable. The Pi can run ordinary Linux software and can store data on the microSD card and/or external USB devices; a Chromecast user does not have direct access to the small amount of internal flash storage and can only access cloud storage via apps. The Pi is a physically larger device, mainly due to the space taken up by the USB and network sockets attached to the board. If those sockets and the GPIO header were taken off the board it would only be slightly larger than a Chromecast. The Pi can be booted off any of a range of OSs; the project homepage supplies Raspbian and also links to third-party OSs, including Windows 10 IoT.

I have both of these devices and have evaluated their performance for playing back videos. The Chromecast's main limitation is that it can only play back content via a compatible app. If you want to play your own videos they have to be on Google Drive or some other storage, and there has to be a cast-enabled app that can play them. So far, while Google Photos was able to play my sample video clip, the video stuttered and no sound was played back. I now need to do some more research to discover whether other apps are available that may have better video performance and be able to play sound. The Chromecast uses your phone as a remote control for media playback, which is convenient since you don't need a multimedia keyboard. It is only possible to play back the audio via the HDMI connector to the TV; if your TV does not have HDMI audio capability, or you need to connect an external audio device, you will need to purchase an HDMI splitter, which itself costs more than a Chromecast. Setting up the Chromecast is easy. Plug the HDMI cord into a socket on the TV and connect the power cable. Install the Google Home app on your phone, which finds the Chromecast. Set up the Chromecast to connect to your wireless network and then add it in Google Home, and you're all set to go. In my case the Chromecast needed to download and install an update from Google, which happened automatically and seamlessly.

The Raspberry Pi can play back anything you want directly through its single HDMI output. There may be some issues with audio playback through the HDMI socket (due to general issues along this line in Debian), in which case the analogue audio/video socket is also available to connect to speakers. This is also an option if you don't want to use your TV's speakers. By default you would need to connect a multimedia keyboard to the Pi to control it. There are apps available for Android that can remote-control various aspects of the Pi and eliminate the requirement for a keyboard. The Pi takes a lot more setup work than a Chromecast. The best outcome will be from installing one of the specialised Kodi-based distros for the Pi; OSMC and LibreELEC are the choices, and I have been testing both. My first attempt, with Raspbian, failed as I could not get any sound at playback. Once you have the image it has to be flashed onto the microSD card and inserted into the Pi before turning it on. OSMC was able to play all my test videos with sound in the Kodi interface it provides. The sound came out of the HDMI and I haven't checked whether there are any settings to switch it to the analogue output socket.
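On Raspbian-era images the HDMI/analogue routing can be forced from a terminal using the commonly documented BCM audio mixer control; whether this applies under OSMC or LibreELEC is an open question, so treat it as a sketch:

```shell
# Raspbian's onboard audio routing control:
#   0 = auto, 1 = analogue 3.5 mm jack, 2 = HDMI
amixer cset numid=3 1   # force sound out of the analogue jack
amixer cset numid=3 2   # force sound back out over HDMI
```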

My next steps in testing the Pi are to try out LibreElec as an alternative to OSMC to see if there is any real difference, and to see if a remote control app can be used on my phone instead of a keyboard. This is a much better option for a small media player. Another option is to fit a touchscreen to the Pi, as the board is designed to support one and they can be purchased. But I will leave that option and focus on the phone one. Next steps with the Chromecast are to try out some third party video apps and see if they can play back better than Google Photos.

Monday 17 December 2018

OpenSong, Video editing, Computing resources optimisation [2E]

OpenSong is a FOSS worship presentation package that is produced for Linux, Windows and macOS platforms. It was first released quite a few years ago, and is a bit out of date, with presumably no regular maintenance at present. Obviously my interest stems from it being both FOSS and Linux capable.

I have installed OpenSong on mediapc for testing and it appears to work well. I need to dig into its capabilities to see if it is capable of handling my particular requirement for adding lyrics to worship videos, initially with the two-step process I outlined in my last post.

I am also interested in developing a one-step capability in OpenSong and contributing this to the OpenSong community. Essentially this will involve, over time, the following capabilities:
  • Playback a video
  • Project lyrics over the video
  • Record the transition steps with their timing
  • Automatically play back the video with the transition steps at their recorded timings.
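The record-and-replay idea in the list above can be sketched out in a few lines (in Python rather than Xojo, and with invented names, so purely an illustration of the data involved): capture each transition as an offset from the start of the video, then replay the same offsets later.

```python
import time

class TransitionRecorder:
    """Record lyric-transition times relative to video start, then replay them."""

    def __init__(self):
        self.cues = []          # list of (seconds_from_start, slide_index)
        self._t0 = None

    def start(self, now=None):
        """Call when video playback starts. `now` is injectable for testing."""
        self._t0 = time.monotonic() if now is None else now

    def mark(self, slide_index, now=None):
        """Call each time the operator advances to the next lyric slide."""
        t = time.monotonic() if now is None else now
        self.cues.append((t - self._t0, slide_index))

    def replay(self, show_slide, sleep=time.sleep):
        """Call show_slide(index) at each recorded offset from the start."""
        start = time.monotonic()
        for offset, idx in self.cues:
            delay = offset - (time.monotonic() - start)
            if delay > 0:
                sleep(delay)
            show_slide(idx)
```

The real feature would also need to start the video and render the slides, but the cue list is the piece that gets saved and played back.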
I have discovered that OpenSong is developed under the Xojo IDE, which, being an object-oriented BASIC, is an ideal development language for me to get involved with. I have downloaded Xojo, which is free to download, develop and test with, but requires the purchase of a licence for distribution. Xojo being commercially developed and supported is ideal as well, IMO. It is interesting that Xojo is also able to build for Raspberry Pi, which is something I will be interested in testing. Xojo itself will be installed on serverpc, which doesn't have much software installed on it and has resources to spare.

mediapc has been upgraded with a new GA-B250M-D3H motherboard so that it can handle video production and playback better and it now has memory increased to 8 GB. It still has the same 1 TB disks and I have to clean them up to make enough space for everything I am looking to do with it. It has been mounted on a new steel shelf under the desk on the opposite side of the desk from where the same chassis was previously used for serverpc. Moving the chassis allows the leads to reach the screens which are on that side of the desk and also means it can be permanently mounted because of the sliding cover on the chassis on the left side.

I am currently working on a new cabinet to house playerpc and the scanner, which will be under the desk on the left-hand side. This will be similar to the old cabinet which was on the right, except that one side of it will be a steel shelf mounted flat against the side of the desk, to which the playerpc chassis is bolted, so the chassis will be attached to the desk like all the others. The chassis, which is a smaller version of the Inwin ones the other three PCs use, will be on its side just as before; this is the only way of giving internal access. The scanner shelf will be right above it, and can be removed when needed to access the inside of the computer chassis. playerpc now has the H110M-S2H board in it with 8 GB of RAM, which gives it better playback capability than its previous configuration. The videos are synced from mediapc, currently using grsync run manually.
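grsync is just a front end for rsync, so the same manual sync can be run as a one-liner. The host and paths here are examples, not the real layout:

```shell
# Pull the video library from mediapc over SSH. -a preserves attributes,
# -v is verbose, -h is human-readable sizes, and --delete removes local
# files that no longer exist on the source (so the copy is a true mirror).
rsync -avh --delete mediapc:/data/videos/ /home/user/Videos/
```

The trailing slash on the source matters: it syncs the folder's contents rather than creating a videos/ subfolder at the destination.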

Saturday 15 December 2018

Free Linux video editors [2]

I have written a few times about video editing; in May this year I talked about a few free packages I had tried out that didn't work.

As mediapc now has more memory and a better hardware spec, I am planning to install Blender, Handbrake, Lightworks and ShotCut on it and see if those packages will work better than some of the ones I have tried in the past.

Mostly the video editing is of worship videos, and I am gearing up to superimpose lyrics onto some clips so that they can be used in small group settings with playback on a TV. The easiest way I have worked out to do this is to make a PowerPoint that shows one line at a time across multiple slides, play back the video on one computer, and on a second computer step through the PowerPoint in time with the worship video while a screen recorder captures the area of the screen where the words are displayed. Then the lyrics video is overlaid onto the original with a video editor. This is a lot easier than entering the lyrics one step at a time into a video editor as a text overlay, with all the fiddling around to get the timing exactly right, which I think would take a lot longer. (But maybe I am wrong about that.)
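The final overlay step can also be done without a full editor. Assuming the screen-recorded lyrics clip is white text on a black background, ffmpeg's colorkey and overlay filters can composite it onto the worship video; the filenames and the key tolerances here are assumptions to tune, not a tested recipe:

```shell
# Key out the black background of lyrics.mp4 (colorkey=colour:similarity:blend)
# and overlay the remaining text onto worship.mp4; audio is copied unchanged.
ffmpeg -i worship.mp4 -i lyrics.mp4 -filter_complex \
  "[1:v]colorkey=black:0.2:0.1[txt];[0:v][txt]overlay" \
  -c:a copy worship_with_lyrics.mp4
```

Whether this beats a video editor depends on how clean the recorded text is against the keyed colour.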

Of course worship presentation software can also do this, but you need extra hardware to capture the video on the same computer, and the software also costs a lot of money. 

It may be there is another way of doing what I am trying to do, so I will look into these options more over the holiday break and see what I can find out.

Tuesday 4 December 2018

Ubuntu Touch may step in where Firefox OS finished....maybe?

We certainly have a strong case for an alternative to Android, because we should have the choice to be free of Google's massive data collection infrastructure that is tracking everything we do and everywhere we go with our phones. Revelations, for example, that Android 8 screen-scrapes every app in the system are mind-boggling. I have by choice switched my PCs to Firefox Aurora as the main browser and regular Firefox as a secondary, dropping Chrome for nearly everything, but I still use a lot of Google accounts on my computers. On the Android platform the scope is there for a much more universal collection of data by Google, even as the European Union looks at more stringent penalties and regulations to force Google to cut back the scope of its data mining.

Ubuntu Touch is these days more of a community effort compared to when it was officially funded by Canonical. Firefox OS looked good for a while and it's a shame it was dropped. UT does look like it is gaining ground, but is of course limited by the degree to which it can be adapted to a very large number of different hardware specifications. So far not many pieces of hardware are supported.

Or of course one can go back to the "dumb" or feature phone I guess....

Monday 3 December 2018

Computing resources optimisation [2D]

I am going ahead with this, so I have been busy over the last few days installing and removing things around the place. To date I have fitted the old mediapc chassis with a new board, CPU and RAM (a GA-B250M-D3H with a Kaby Lake Pentium G4560) and moved all three disks across from serverpc. It was a pleasant surprise for it to boot straight up, especially as turning off UEFI is still possible even on these relatively new boards (although this model is about two years old and in fact end of line).

The next to be treated will be mediapc, going into serverpc's old chassis with another new board and CPU as above, reusing some of serverpc's RAM and keeping the mediapc disks.

The board from serverpc will be going into playerpc with some of its original RAM, keeping its existing disks. The chassis will be modified so it can be bolted in place like the others, except sideways against the wall of the desk, a distance off the floor for ventilation, and a means of supporting the scanner's sliding shelf will be arranged.

I was able to get wholesale pricing on everything, which meant I accomplished a lot more than I had expected for the same money. The result is better than I was originally planning, as it has been possible to upgrade all three of the PCs apart from mainpc, which is still pretty good even though it is about four years old and now, in fact, the oldest.

So to recap, we now have a H97 Haswell Pentium G, a H110 Skylake Pentium G and two B250 Kaby Lake Pentium Gs. Why take the Kaby Lake boards and CPUs when Coffee Lake is available? Coffee Lake Pentium G and Celeron CPUs are in very short supply worldwide, so a Coffee Lake board would have had to be fitted with a Core i3 CPU. These are basically double the price of a Pentium G, and although the performance is somewhat higher, it could not be justified for the extra money.

It is interesting that Kaby Lake Pentium Gs have two cores and four threads, whereas the Haswell one I have has only two cores and two threads. So the Kaby Lake Pentium G is basically specced about where a Core i3 was a few years ago. However, the graphics performance has been downgraded somewhat in the Pentium G, certainly to a lower clock frequency. In my case the number of times you would see the CPU maxed out is rare, so the higher cost simply can't be justified in any way.

All of the systems are now using Intel graphics in Linux. The Ivy Bridge system used to have a two-head NVidia card made by Gigabyte, which cost only $50 and provided for two digital displays, as the board only had VGA and DVI outputs and, from memory, supported only one display onboard. This system was the last one on my desk using a non-Gigabyte board, from back when I was still buying Intel-made boards; apparently Intel doesn't make desktop boards any more. This board, CPU and RAM are now looking for a new home around here, but I have plenty of spare chassis they could go in. The NVidia cards were OK until the Nouveau drivers started causing problems of various sorts, and because Intel graphics are better supported and can drive two or three displays on all the boards, it suits my setup to go onto Intel, although Google Earth still seems to have an issue with mainpc's particular onboard graphics.

Wednesday 28 November 2018

Linux video driver issues

One of the challenges with Linux is that bug fixes can be slow to be developed, and there can be issues when libraries in one component of a distro are updated on a different time cycle from other parts and therefore break software. For example, there is a version of Qgis that depends on a library package set called Ubuntugis, which is a specific branch of the Ubuntu community. Problems occur from time to time when Ubuntugis packages are updated on a different cycle from Ubuntu itself or Qgis.

I have had NVidia graphics cards in several computers for a number of years. When I was still running Windows I had a quad-head NVidia card in mainpc, and although NVidia does produce their own drivers for Linux, there were issues with these that led me to take this card out and instead fit a two-head Gigabyte NVidia card costing $50, using the open source Nouveau driver instead. However, this seems to be difficult to keep updated and causes problems with other software, notably Google Earth. The latest issue was when an update to some of the xorg components (the basis of the X11 graphics package) had problems with the Nouveau driver, preventing a screen from being rotated.

Fortunately Intel graphics are well supported, with Intel themselves being willing to develop open source drivers for Linux, and as a result I was able to remove the NVidia card from mainpc and just rely on the onboard graphics, which also of course lets me use the later versions of Google Earth.

Thursday 22 November 2018

Computing Resources Optimisation [2C]

Still looking at this, really just fine-tuning expectations and specs. Last time I talked about task separation. Running stuff on mediapc is still a test case really. What has turned out to be important, however, is the limitations on mediapc caused by it having an old CPU that is slow to decode some video formats. This led to problems recently with video playback from the Vimeo site. There was an issue with bandwidth limits on my internet feed, but when the limits were eased I still had problems with video dropout, and since the NUC had no problems at all, I determined it was most likely the hardware spec of this computer, which is the oldest full-size computer of the three I use daily (there are also two mini-ITX computers that would have even more issues).

However the present mediapc is perfectly OK when playing back most of the formats that I have saved videos into on my computer - generally MP4 or MKV. For this reason it will be OK as a playback only computer slotting into the "pc4" role. It might get called "playerpc" or something new.

If this optimisation happens this is the summary of how the computers will change:
  • mainpc stays as is now
  • mediapc's chassis has the board removed and put into pc4
  • A new board, CPU and RAM are added to mediapc's chassis and it gets called "serverpc" or maybe "secondpc" or "graphicspc".
  • The present "serverpc" becomes the new "mediapc", with the only change being the disks taken out and put into secondpc, after which it gets the disks from the old mediapc. This computer, being a Skylake, has a good enough spec to be considerably superior to the present mediapc, which is only an Ivy Bridge. With 16 GB of RAM it can be used for a lot of things. It will continue to be used for backing everything up.
  • The compact chassis gets the old mediapc board but keeps its existing disks. 
Anyway, of course, it remains to be seen whether these plans progress.

The original proposal for the new system architecture gravitated at the time to a Coffee Lake system. However, the Coffee Lake low-end CPUs (Celeron and Pentium G models) are in very short supply worldwide. I have not bought a Celeron for many years, as the slightly lower cost has in the past not been worth the hit in performance, although the gap has closed in more recent times. The Pentium G, which would be a Core i1 if there was such a thing, has become even more powerful, now more or less what a Core i3 used to be. This is absolutely typical of Intel: they keep shifting the goalposts. Oftentimes they will be trying to trick people into buying a CPU just on a name like i3, i5 or i7, and then these people will discover that their especially cheap Core i series chip is a crippled version, because there were some models produced with a lowered spec to compete aggressively. This time, fortunately, the goalposts have shifted the other way: an i3 is now a higher spec than was originally the case, and a Pentium G is now 2 cores / 4 threads instead of the 2/2 of all the other Pentium Gs that I have (which is all three of my computers with a microATX board in them).

So because of the extra cost of a Coffee Lake system, and with the i3 CPU (the lowest spec I could get) out of stock at the moment, I took another look and came up with a Kaby Lake system with the Gigabyte B250M-D3H board, because the Kaby Lake Pentium G is still readily available, as I suspect it will be until Coffee Lake Pentium Gs become more abundant. Although the RAM for this option is slightly dearer, the saving on the CPU more than cancels this out, resulting in about $80 less cost overall.

However, as the Ivy Bridge system is slated to become a media player and actually has trouble playing back some formats, there is another way to use that $80, by rejigging things a bit. If the new system gets only 16 GB of RAM, reused from the Skylake system, then replacing the Ivy Bridge outright with a second new Kaby Lake system, and giving one of the Kaby Lakes and the Skylake 4 GB each, would sneak in for a similar amount to the original spec. The Ivy Bridge board and CPU could then be sold or kept in reserve.

Wednesday 14 November 2018

Scanning with Linux

As we all know, I have gradually moved lots of the things I do from Windows to Linux over time, but I still have a computer running the Home edition of Windows 10. One of the few things that still runs on Windows 10 is my Epson Perfection V200 Photo scanner. Well, it did work well on Windows 10 until the recent update, and as it turns out, lots of Epson scanner owners are now finding they have scanning software problems on Windows 10.

Although I was eventually able to get my scanning done with a lot of mucking around, I have decided, not unnaturally, that maybe it is time to take a look at scanning on Linux. The last time I looked at this was over a year ago, and at the time the options didn't look very attractive.

This time around I decided to take a look at a shareware package called VueScan, which is cross-platform, being produced for Windows, macOS and Linux. It was easy to install the basic package on Linux. It then told me I needed a driver and provided a link to the Epson website. Much to my surprise I found a deb package available to download for Debian/Ubuntu 64-bit.

I then found that SANE needed to be installed, so I looked up the Debian scanning page and ran the tests on it, which mostly seemed to work (scanimage -L didn't seem to work, however). I then ran the install.sh script to install the Epson .deb package, which seemed to install OK.
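The basic SANE checks are quick to rerun from a terminal whenever the setup changes:

```shell
# List detected scanners; the Epson should show up once the drivers are in.
scanimage -L

# Grab a quick test scan to a file (format support varies by SANE version;
# tiff has been available for a long time).
scanimage --format=tiff > test.tiff
```

If `scanimage -L` shows nothing, power-cycling the scanner (as happened here) is worth trying before digging into backend configuration.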

After turning the scanner on and off again, I found that ImageScan (the Epson software package) and VueScan both came up and detected the scanner OK. 

I have tested so far with ImageScan but not yet VueScan. Both seem to be able to acknowledge the scanner's film and print scanning capabilities, but so far I have not tested this out. A basic document scan with ImageScan worked well.

This should work out a lot more convenient, but I do remain concerned about being dependent on Epson producing and updating its drivers for this scanner in the future, because I imagine eventually they will stop supporting it. The joys of owning hardware. At least with the printer, the HP PCL print language is so widely supported that even very ancient printers can continue to be used forever.

Wednesday 7 November 2018

Computing resources optimisation [2B]

Last time I wrote a post in this series it was looking at upgrading some of the hardware that I have. This time it is looking at task separation or, put another way, using mediapc to do some of the stuff I have looked at previously.

This keeps coming up because I have four computers and one of them hardly gets used. Granted, there is a case for using pc4 to evaluate new editions of Debian, but I don't have anything to run on it. If I make it the media playback computer then mediapc is redundant.

So the idea of using mediapc to do some of the day to day stuff such as email, perhaps with just a single screen, is gaining some traction at the moment. If I follow through on this idea then pc4 will get the second screen currently on mediapc for playback purposes and will be set up to play stuff. However I expect the master media library will stay on mediapc with its RAID array for now. We will just sync as we do with the other computers that play media.

mediapc will have to be set up to hibernate and may need a reinstall of Debian, because with sddm running on top the hibernation is not very reliable (it works well with lightdm).

The result is that mainpc would be set up just for maps alongside serverpc. There is quite an advantage for mediapc in that it would have one screen dedicated to stuff like email (there is no new-mail widget in KDE at the moment that can tell me when a message has come in) and it can be left up all the time so I can just glance at it. At the moment I can only monitor new messages with my phone.


The tricky part is that I can't easily use a third keyboard and mouse in the current layout (although mediapc's keyboard is accessible, it isn't ideally placed for typing), so it may end up that mediapc is remoted with Remmina, or else I just press a button when I need to switch the main keyboard onto it.

Some testing is going to start fairly soon, before I have too many more crashes on mainpc. It is tricky, though, because all the resources are on mainpc and I can't see myself moving files across to mediapc, so mediapc will be at some disadvantage in having to access that material over the network.

Monday 5 November 2018

Qgis functions for labels

One of the great things about Qgis is being able to use expressions in a range of places. This is just a little bit about using a function to combine various fields together to make the caption on a feature's label.

In NZ Rail Maps I have features called Locations which are a range of things - stations and mileposts are the main ones.  These can have a name, and up to two distances attached to them. There are varying requirements depending on the type of location and the label caption is made up of seven fields combined over three lines. This is the function that is used to create that label.

if(coalesce("caption" ,'') = '', '', "caption") ||
if (coalesce("Distance",-1) = -1,'', if(coalesce("caption" ,'') = '','','\n' )|| trim(concat("Distance",' ', "Unit", ' ', "Label"))) ||
if(coalesce("Distance 2",-1) = -1,'','\n' || trim(concat("Distance 2",' ', "Unit 2", ' ',"Label 2")))

Basically it consists of the output of three conditional evaluations (if() statements) concatenated together. Each of these if statements is responsible for one line of the label.

I'll break it down line by line:

if(coalesce("caption" ,'') = '', '', "caption") || 

So that first line is pretty simple. It checks to see if the caption has anything in it. The coalesce() function is used to overcome null propagation: if the caption were null, that null would propagate through the entire expression and only null would be returned, regardless of the rest of the function. So the "caption" field is output if it is non-null. The || on the end simply concatenates the result of the next if() statement onto the label caption.
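As an aside, null propagation is easy to see for yourself in the Qgis expression editor. These two expressions are purely illustrative (they are not part of the label function):

```
'Station ' || NULL
'Station ' || coalesce(NULL, '')
```

The first returns NULL, with the whole concatenation swallowed; the second returns 'Station ' because coalesce() substitutes an empty string for the null before the concatenation happens.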

if (coalesce("Distance",-1) = -1,'', if(coalesce("caption" ,'') = '','','\n' )|| trim(concat("Distance",' ', "Unit", ' ', "Label"))) || 

So the first thing we do for the second line is to see if its first field, Distance, is empty (null), again using coalesce(). This time, because it is a numeric field, we use a number for the comparison instead of a string. If there is a number in there, we use a second if() statement, which looks like the first line's evaluation of the "caption" field and decides whether to put a newline character into the label. This ensures that the first line of the label is not left blank when there is nothing in "caption": while a station will have a caption (its name), a milepost is not named and its label starts with a distance. The rest of the statement just adds the "Unit" and "Label" fields to the second line, and the last thing is the double pipe to concatenate the third line on.

if(coalesce("Distance 2",-1) = -1,'','\n' || trim(concat("Distance 2",' ', "Unit 2", ' ',"Label 2")))

The last line is essentially a repeat of the second line, using different fields which are duplications of the ones used to make up the second line. It is simpler because it assumes that there is always a line above it that has already been filled by the previous code and therefore it doesn't need to check before adding the newline character to move its output down onto the third line of the label.
It took me a lot of work to get this function right. After years of using a simpler function that would put out extra blank lines unnecessarily, I just wrote the new function last night. Here is the old function:
if(coalesce("caption" ,'') = '', '', "caption" || '\n') || trim(concat("Distance",' ', "Unit", ' ', "Label", ' ',"Distance 2",' ', "Unit 2", ' ',"Label 2", ' '))

That one only uses two lines and the second line will always be present, but will be blank if the fields are empty. You can have stations for which you don't know a distance, so that second line is optional. Hence in the new function we have code that checks field values to see if the second and third lines are needed, and if they aren't, nothing is output, rather than a blank line.
The reason I haven't had this function until now is that I hadn't realised it was possible to concatenate multiple if() statements together. If everything had to fit in one if() statement, that statement would be extremely complex. I know from writing functions in spreadsheets that you can do a lot with nested if() statements, but multiple levels of nesting make for code that is very hard to understand and debug. Since formally studying programming at CPIT more than 20 years ago (which built on my long experience from high school), I have avoided complex nested conditionals by breaking the evaluations up into multiple sequential statements.

So the new label function took a couple of hours to test and debug. A couple of things make this easier. One is to code the function in a separate text editor rather than directly in the Qgis expression editor. The other is to test one piece at a time, because the editor's syntax evaluator isn't very helpful and doesn't highlight exactly where a problem is. So for this function, each of the three statements was written in a LibreOffice Writer document and then pasted one at a time into the Qgis editor for testing.

Monday 15 October 2018

Computing resources optimisation [2A]

A couple of months after doing some resource optimisation with my existing computers, it's time to look at the next stage: upgrades to enable all four computers to be used to their fullest.

The plan I am considering at the moment (subject to resourcing) is:
  • serverpc gets a new mainboard with 4 RAM slots and a new CPU, keeps its existing 16 GB of memory, and gets another 8 GB added, so the new motherboard has to be compatible with the memory currently in serverpc. serverpc also gets renamed to graphicspc or mapspc.
  • mediapc gets serverpc's old mainboard and 4 GB of RAM that I will have to add specifically to mediapc. One advantage is mediapc will have a spec that can play Google VP8/VP9 codec stuff which is not currently the case.
  • pc4 gets mediapc's old mainboard with the 4 GB of RAM that it has on it. pc4 will be faster but with less RAM so it ends up doing pretty much what it does now, just better.
So I'm looking at the costs etc. of that. All four computers will probably use Intel graphics, avoiding the issues with the NVidia cards and their drivers. They all have enough ports to drive two displays on the three computers that have a dual display setup.

I also need to look at whether I have enough backup disks at the moment to back up all three computers. The disks in all of the computers will remain the same as at present; I don't have any desire to increase any of the disk capacities in the near future. mainpc and serverpc have 2 TB each and mediapc has 1 TB, and cleaning stuff up will be the first call. The maps are growing, but that material is really going onto serverpc more than mainpc, and there is a lot of space that can be freed up on serverpc if I need to.

The plan is to give serverpc the ability to go up to 32 GB of RAM eventually, and the numbers wouldn't have worked if the existing memory couldn't be reused. Even 16 GB of new RAM for the new mainboard would have cost $300, compared with $85 to move the old board into another computer and add 4 GB to it. The saving from reusing the existing RAM is enough to pay for adding another 8 GB to this computer.

The existing board in pc4 will be a spare for the Win10PC which is using the same type of Mini-ITX board. I have no other use for these boards although they will fit any standard ATX chassis so if the Antec Mini-ITX chassis had a power supply failure or whatever the board could go in another chassis.

What's interesting about computer hardware at the moment is that Moore's Law has slowed down a bit. Intel has hit the limit of 14 nanometres for five generations of CPUs. The four computers will have the following specs:
  • mediapc: Ivy Bridge, 22nm, Intel DB75EN, Pentium G2120 
  • mainpc:  Haswell,  22 nm, Gigabyte H97, Pentium G
  • serverpc: Skylake, 14 nm, Gigabyte H110M-S2H, Pentium G4400
  • new: one of the following depending on what I can get
    • Kaby Lake, 14 nm, Gigabyte H270M-D3H, Pentium G4560
    • Coffee Lake, 14 nm, Gigabyte B360 D3H, Pentium G5400
But Coffee Lake and Whiskey Lake, the next two generations of CPUs, will still be 14 nm. And the 14 nm architecture started with Broadwell, which sits between Haswell and Skylake.

So at 14 nm there are Broadwell, Skylake, Kaby Lake, Coffee Lake and Whiskey Lake, before the new 10 nm architecture finally arrives with Cannon Lake, Ice Lake, Tiger Lake and so on. Usually with Intel there are only two generations (tick and tock) on the same process, so they must have had some difficulties getting to 10 nm.

With this optimisation (assuming it happens) I am also getting away from the idea of being able to use three or four computers for maps. The idea that mediapc could do some stuff has been dropped in favour of adding one of its screens to serverpc and using four screens across two computers and doing all the map work on those four screens. So the case is there to pump serverpc up to more memory.

Sunday 14 October 2018

Graphics issues with map mosaics [3]

Continuing from the previous post in this series. Once you have made up some mosaic tiles you need to know how to edit the sidecar files. In this case with the map tiles that come from LDS there are three sidecar files alongside the actual image. We only need to concern ourselves here with the .jgw file:
  • jgw file: this is called the World File and contains six lines of text. The values of these in order are:
    • Line 1: A: x-component of the pixel width (x-scale)
    • Line 2: D: y-component of the pixel width (y-skew)
    • Line 3: B: x-component of the pixel height (x-skew)
    • Line 4: E: y-component of the pixel height (y-scale), typically negative
    • Line 5: C: x-coordinate of the center of the upper left pixel
    • Line 6: F: y-coordinate of the center of the upper left pixel
  • If we have tiles that are double the resolution in both directions but otherwise identical, we just need to scale the first four values to match the new pixel size (in practice just lines 1 and 4, since the skew values here are zero).
  • In this case the world file for the upper-left most tile out of the four we have in the file contains the following values:
    • 0.400000000000000
    • 0.000000000000000
    • 0.000000000000000
    • -0.400000000000000
    • 18835200.199999999254942
    • -5627520.200000000186265
  • We just need to change the first line to 0.2 and the 4th line to -0.2 to say that each pixel is half its previous size.
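This edit can also be scripted rather than done by hand. Here is a minimal shell sketch, assuming a world file named tile.jgw with the values shown above; only the two scale lines (1 and 4) are halved, since the skew lines are zero:

```shell
# Write the original world file (values from the example above).
cat > tile.jgw <<'EOF'
0.400000000000000
0.000000000000000
0.000000000000000
-0.400000000000000
18835200.199999999254942
-5627520.200000000186265
EOF

# Halve the x-scale (line 1) and y-scale (line 4) for the doubled image;
# the skew and origin lines pass through unchanged.
awk 'NR==1 || NR==4 { printf "%.15f\n", $1 / 2; next } { print }' tile.jgw > tile_2x.jgw
```

The same one-liner works for any scale factor by changing the divisor, which saves editing each world file by hand when there are many tiles.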
When you open that tile as a raster in Qgis it looks for all the sidecar files. Among other things, the other two files tell it the CRS (EPSG:3857 is the one I use for all NZ Rail Maps, as it is compatible with Google Earth KML files), which it needs in order to interpret the world file coordinates. The files all need to be renamed to the same file name part, keeping the extensions intact. The extensions are required because Qgis is multi-platform and the Windows versions rely on them (even though in theory they could read the magic numbers from the file headers the way Linux and MacOS do). So I use Thunar's bulk renamer to rename groups of files; I have installed it under KDE (it is part of XFCE but works under KDE with no problems).
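The renaming can equally be done from a shell. A small sketch with hypothetical file names (oldname renamed to chch_2x), keeping each extension intact:

```shell
# Create some stand-in files (hypothetical names) for demonstration.
touch oldname.jpg oldname.jgw oldname.prj

# Rename the image and its sidecars to a common new base name,
# preserving the extension of each file.
for f in oldname.*; do
  mv "$f" "chch_2x.${f#oldname.}"
done
```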

Well, it came together exactly as planned with perfectly aligned tiles, so that is the technique I will use on a lot more maps whenever I need to make best use of higher quality scan imagery.

Graphics issues with map mosaics [2]

As I wrote in my last post there are problems when you have to scale down imagery significantly. So I am working on a new map mosaic experiment to see what I can do with it.

Starting from four tiles at 0.4 metre resolution i.e. 1 pixel measures 0.4 metres on one side, I am scaling each of these tiles to double in each direction so they have four times as many pixels in total.

The key assumption is that Qgis will be able to load these much larger tiles and display them as well as the originals; we just have to tweak the jgw (world) file to say the pixel size is half of what it was previously, and then they should draw in the right place.

I have to recreate the mosaic from scratch; I can't use my previous mosaic in any shape or form unless I also scale up all the historic aerials, and that would lose significant quality. Since those are only background now that I have these completely new images, which are the most significant, starting from the old mosaic and arranging the new tiles around it could have been an option, but I have decided to start from scratch and build a completely new mosaic with only Retrolens coverage.

So far the results look really good in Gimp, and I am banking on the larger tiles looking really good in Qgis, because that has been my experience with Qgis and map mosaics to date, based on using the Christchurch City 0.075 metre LDS contemporary coverage and the higher resolution Christchurch Retrolens material, which Qgis handles well.


Graphics issues with map mosaics [1]

Well, this is a tricky little number for people like me who are doing things with high resolution images. I am frequently combining high resolution scanned Retrolens imagery with other aerial images and running into scaling issues. Basically, if I have a Retrolens image at one resolution and the LDS aerial image at quite a different resolution, then scaling the Retrolens image can have awkward outcomes if the scaling is significant.

As of today I have been working with a high resolution image of a particular area that has been scaled down to something like 15% of the original dimensions in each direction (the resulting pixel count is 2.2% of the original), and this loses a lot of resolution. The truth is there is not a lot that can be done about this: regardless of how many pixels there were in the original, the size of the pixels in the final image is preset by the aerial image that I am overlaying onto, and therefore there is a considerable loss of detail. This resampling is hard to watch, because the detail in the screen preview, which just scales the original pixels to a smaller size, is so sharp, and only after the transformation can we see how much detail has been lost in the scale-down.

To preserve all of the original detail I would have to make the original layer much larger, with the pixel size set accordingly in the world file for the JPEG image. This is how a GIS works out what area to display a map tile over: it looks up the world file, which tells it where the upper left corner is and the size of each pixel, and it scales the tile to cover the corresponding amount of space on the canvas.

I am going to experiment with that, as it does seem desirable to be able to scale the maps up and see all of that ground detail at certain levels. This means greatly scaling up the original image so that the retro image does not have to be scaled down so much. Up until now it has been the reverse, with most retro images scaled up in size, which is understandably a lot easier for the software to deal with and results in much less quality loss. There is a lot more work in the new approach, and in part it entails redrawing the whole mosaic from scratch, which is not easy or straightforward, but it looks like the way to go. It will be experimental and not a high priority to carry out.

Saturday 13 October 2018

Limits.conf settings to allow a lot of file handles


One of the things I always have to remember, each time I install a computer or VM for Qgis, is to change the default settings in /etc/security/limits.conf to allow a larger number of open file handles. Otherwise Qgis can't open a lot of layers at once.

Open this file and put the following at the end:

*  hard  nofile  10000
*  soft  nofile  10000

Then reboot the system so the new limits take effect. This fixes the crashes Qgis has when it runs out of file handles.
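After the reboot, you can confirm from a terminal that the new limits have been picked up. A small sketch; the 10000 values assume the limits.conf entries above are in place:

```shell
# Soft and hard open-file limits for the current shell session.
# With the limits.conf entries above applied, both should read 10000.
ulimit -Sn
ulimit -Hn
```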

Thursday 11 October 2018

Setting a custom content width on Blogger's Simple template

Most of my Blogger blogs use the Simple template; I haven't got much time or inclination to scroll through masses of templates and experiment.

You can use the theme designer or you can directly edit the HTML to change some settings.

On some of my blogs I have directly edited the HTML to set a custom width of 1900 pixels so as to use the full width of my 1920x1080 screen. I prefer this width because I like to make my images 1600 pixels wide for display on the blog. 

In the Simple blog template HTML, edit this section (around lines 583-586 of the template):

    <b:template-skin>
      <b:variable default='960px' name='content.width' type='length' value='1900px'/>
      <b:variable default='0' name='main.column.left.width' type='length' value='330px'/>
      <b:variable default='310px' name='main.column.right.width' type='length' value='0px'/>

The line you are changing is the one with name='content.width'; as you can see I have put in 1900px, which is wider than the theme designer interface allows me to set.

NZ Rail Maps is the first blog to use this wide layout. Some of the other blogs will be adjusted as necessary.

Wednesday 10 October 2018

Apps, apps, apps

I don't write much about mobile so it's time to fill in this gap by blogging about some of the apps I have on my various phones. The main phone I have is a Nexus 5X, and I have a second phone, a Galaxy J2, which has a tablet share SIM (off the Nexus' account) and is capable of making calls, but not receiving them (although it can receive texts). I also have a Galaxy Tab A 8" tablet, which likewise has a tablet share SIM and can make calls but not receive them.

Apart from those devices, which all get a reasonable amount of use and can connect to the cellular network, I have two more older phones that are Wifi only and have been used as media players. The Moto E, which was my first Android phone and cost $99 new, isn't really flavour of the month as a media player now because it has trouble running apps reliably. I'm not sure if this is down to the old version of Android, the CPU/RAM, or the lack of storage (it only has 4 GB). So that leaves my Lumia 635 running Windows 10 Phone (it had WP8 when I got it), which has become the default media player. I always have a separate (low value) device as the media player so you can leave it playing music somewhere and still use your phone, without running down the phone's battery; these smartphones go flat too quickly for my liking, and it seems that cellular services do drain the battery faster.

This is about some of the useful apps I use on the Nexus and also on the Lumia, because those two get the most use every day. In the listings below, everything is Android unless it says otherwise.

  • Media Players: The best media players are the ones that can browse folders on your device. If you are like me and have converted your video clip collection into MP3s for your phone, the tags that most media players need to sort them by artist, album and so on aren't automatically put into an MP3, so players like the default ones (Google Play Music and Groove) can't pick them up. And of course videos don't have tags like MP3s do. Here are some great media players:
    • Pulsar music player - easily browse folders, no ads.
    • PlayerXtreme video player - folder view (not exactly browsing all folders but just lists those that have videos in them).
    • Still trying to find something as good as Pulsar for W10P. Have looked at so far:
      • Loco Delight - corny name, a bit hard to navigate, stable and plays well.
      • Next-Player - a little tricky to navigate. Plays well when it isn't freezing the phone. I am looking at whether it conflicts with another app to see if I can work out where the problem is.
    • One thing to be aware of is that, as with videos, you need to put a number at the beginning of the file name (e.g. 01, 02) to ensure playback in a certain order when playing from folders. At the moment most of my music collection isn't converted from videos, so the files don't have these numbers, but as soon as I get around to changing over to converted clips and number-naming everything, things will be better.
  • Do Not Disturb: The default DND app on the Nexus (the built in Google/Android app) is very annoying because of bugs. The biggest is that if you go in to check what is turned on (e.g. whether a rule has activated), just looking at the settings overrides any rule currently in place, and you can't go back to the rule having the higher priority. That is a little hard to explain, but it means there have been many times I accidentally turned DND on permanently without realising it, and only noticed some time later (often days) that it was still on.
    • Auto Do Not Disturb is a great replacement for the Nexus default DND app. The profiles give you a way to make absolutely sure DND is turned off at a certain time every day. I use an overnight profile which turns off alert sounds but leaves me able to receive phone calls and alarms; so no more annoying texts and app alerts in the middle of the night, but DND is forced off every morning. Apart from creating your own profiles for custom days and time slots, there is also a preset countdown profile you can click straight onto that will turn on DND for a preset amount of time. The premium version is $5.49 and well worth it.
  • Mapping and GPS: A simple GPS logger that works a bit like my old cheap eTrex for logging a journey you are taking is a good app to have on your phone.
    • GPS Logger by BasicAirData is a FOSS app that does a great job of logging and tracking a journey, with real time display of data including altitude, coordinates and speed. The only thing I found difficult was exporting the data because its data folder isn't accessible on my PC because of the way data is stored on my phone's built in memory. Uploading to Google Drive fixed that.
  • Realtime Metro Timetabling: 
    • Christchurch Transit is the latest app that can display RTTI information for bus services in Christchurch. I found the display cluttered and a little hard to use, and the older Chch Metro app is still my preference for its simplicity, although it is no longer in the Google Play store.
  • Scanning: These apps can let you scan a photo or document from your phone. A killer feature some have is the ability to automatically correct geometric alignment issues.
    • Office Lens is a neat little app from Microsoft that can attempt to correct the shape of photos taken on your phone that are not perfectly aligned. It works quite well a lot of the time, but in practice I found it easier to turn off the correction for certain tasks (like copying material from bound files in Archives New Zealand, where it was hard to delineate the edges of an original). However, it was good for copying family photos, except that if the lighting is insufficient, the gain-up adds a lot of noise or reduces resolution.
  • Data Sync: these apps can be used to synchronise data between a phone and PC for example.
    • KDE Connect is an app that can be used to transfer data between a phone and a PC that is running KDE. I haven't used it much, and one consideration is that like Bluetooth it can send maybe only one file at a time instead of a whole folder of files. A great substitute for Bluetooth with desktop computers which mostly lack the hardware for BT.
  • Camera apps can do more than the built in app of your phone in many cases:
    • Camera FV-5 has a good reputation and I bought the Premium version of it on the Nexus. However it can have a bit of a learning curve, and has this annoying feature of rotating the display to landscape mode after taking a picture in portrait mode. I do use it when I want to do swish stuff, but for everyday simple things the built in app still gets a lot of use.
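As a footnote to the folder-playback point above, the number prefixing can be scripted rather than done by hand. A sketch with hypothetical file names, numbering MP3s in alphabetical order:

```shell
# Create some stand-in MP3s (hypothetical names) in a demo folder.
mkdir -p demo
touch "demo/alpha.mp3" "demo/beta.mp3" "demo/gamma.mp3"

# Prefix each file with 01, 02, ... so folder-based players
# play them in a predictable order.
i=0
for f in demo/*.mp3; do
  i=$((i + 1))
  mv "$f" "demo/$(printf '%02d' "$i") $(basename "$f")"
done
```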
So there we have it, a pile of different apps that I find useful, and hopefully you will as well.

Monday 1 October 2018

HOWTO: Enable root login in KDE / Debian

UPDATE: Since I discovered Ctrl-Alt-F1 and found I could root login there and have no problems, I have not bothered modifying any KDE computer to enable the root login.

Last time I wrote that I needed to try a new user profile on serverpc because of issues resuming from hibernation. This has proved to be more involved than I expected, because sddm automagically disables root login, and you can't log in as yourself, go into a root shell and delete your own account, since running processes have your user login locked, preventing it from being modified.

After a lot of investigation I found the following instructions which enable root to appear on the KDE logon screen:


  • Basically, you need to log in to a root shell and edit the /etc/sddm.conf file. This file may not yet exist on your system, in which case you must create it. You then need to add or change these sections:

[Autologin]
Relogin=false 


[Users]
HideShells=/usr/sbin/nologin,/bin/false,/bin/sync
MaximumUid=65000
MinimumUid=0


It's possible the [Autologin] section may not be needed, but the [Users] section is needed in full. The first line hides every system user except root, because there is a whole pile of other system users you don't want appearing on the graphical login. The other two lines set the range of user IDs that can appear. Normally the default for MinimumUid is 1000, which of course is the first general user account on the system; change it to 0 and it will take in root (and other system users, which get blocked by HideShells).

  • Also in your root shell run passwd root to ensure a password has been created for root.

  • You also need to edit /etc/pam.d/sddm and make the following changes:
Comment out the following line by putting a # character at the beginning of it

auth    required        pam_succeed_if.so user != root nopasswdlogin
 
If the following line is commented, uncomment it (remove any # character in front of it)
 
auth    sufficient      pam_succeed_if.so user ingroup nopasswdlogin 

Reboot the system and it should come up with Root enabled on the login screen (it will be highlighted in red instead of black).

  • In addition the following step has been found to be necessary:
Comment out a line in /etc/pam.d/sddm that reads as follows
auth      required     pam_succeed_if.so user != root quiet_success

Wednesday 26 September 2018

Fun with Buster / KDE over XFCE on Stretch

It's one thing to be able to install an alpha edition of Debian and it's another for it to be usable. But that's part of the whole process for an alpha edition of something.

But because running a Debian alpha is so much of a challenge, I can't do it with a rotated screen as I am attempting on pc4, because I can't guarantee the display can be rotated with that computer's video configuration. Therefore the screen has to be in regular landscape orientation, and that means I have to find somewhere for it within the footprint of my desk along with the other screens, because at the moment it is vertical to save space.

The other issues with Alpha 3 are also significant, and the ones I have noticed most so far are in the install process, whereby choosing KDE or LXQt as the desktop environment results in a system that will not boot into that environment. Instead, you get a shell login screen and have to type startx to bring up the desktop. XFCE is fine, so my next move will be to try putting KDE over the top of XFCE.

Trying that on serverpc running Stretch has turned out to be an issue: as with my previous experience of just putting the Breeze logon screen and sddm onto my Debian/XFCE computers (going into XFCE after logon), hibernation has become flaky. Since hibernation is reliable on mainpc running a clean install of KDE on Stretch, reinstalling serverpc from scratch with Debian/KDE is the obvious next move, and fortunately it isn't as time consuming as reinstalling mainpc, because serverpc doesn't run a lot of stuff. On the Buster computer this issue won't exist because it won't be expected to hibernate anyhow.

After installing Buster again on pc4, with XFCE as the default desktop environment, I then attempted to install LXQt over the top of it. However, after configuring sddm (the default display manager for LXQt and KDE), the desktop failed to start and I was left with the command shell logon. This is basically a repeat of the issue seen when choosing either KDE or LXQt during the install. If you type startx it will start up, apparently via lightdm, which indicates there is some sort of issue with sddm.

At the moment that is what I am left with, but then that is one reason it is an alpha, so I imagine there is probably already work happening to resolve these issues.

serverpc is still having issues after resuming from hibernation: one screen fails to load the background and both screens fail to load the panels. So I am going to try setting up a new user profile; fortunately these panels don't have a lot of customisation. bedroompc has had x11vnc put onto it to replace Krfb, because the latter can't handle a dual-homed computer (one with two network interfaces, wireless and cable); it picked the wrong interface to bind to, so I couldn't remote to it with Remmina. I have installed the latest version of x11vnc from Debian testing so it will be reliable, and I can remote to that to write this blog, because I am having problems with my internet again and this is written on a tethered connection on my phone. The cable is set up for internal network use only so that I can run VNC over it, and the internet connection for that computer is made using its internal wireless adapter, either to my wireless access point in the house or to a tethered phone. So it will be a lot easier to use bedroompc for this than pc4, because I don't need to keep switching one of the KVMs over to pc4.

Thursday 20 September 2018

Life with KDE [5]: Install on Debian over XFCE

So I haven't written much on this blog for a couple of weeks. Now I am writing about KDE because I have decided which computers will have it and which won't. serverpc gets it because it is fast and has 16 GB of memory. mediapc, with only 4 GB, is the one that doesn't; it will stay on XFCE for now and at least have working MTP, which KDE doesn't do very well at the moment.

So for serverpc, instead of installing Debian 9.5 from scratch (a big deal at the best of times, though with my configuration of a separate system drive, nowhere near as big a deal as reinstalling Windows), I decided just to install the kde-full package on top of XFCE, using Synaptic to find the package. It's quite a big download: nearly 900 packages of dependencies in all, something like 2.5 GB.

At the same time I am changing screens around yet again, because the most optimal use of screens is a key issue and serverpc really needs two to make the best use of it, one portrait and one landscape. With another screen added for pc4, and pc4's screen going to mediapc, it can all work, because mediapc doesn't need both of its current screens, so one of those can go to serverpc with minimal work, just changing a few cables around.

Anyway, back to that KDE overinstall. Synaptic had some problem with a missing package, so I just did it in a terminal with apt install kde-full and away it went. You will get a popup asking which display manager you want to use, lightdm or sddm; this is because XFCE on Debian uses lightdm by default, while KDE uses sddm. So of course I chose sddm. Now the only real issue is that the command I use to rotate the logon screen is in the lightdm config file and doesn't apply to sddm. However, as you only have to type in the password there, it is only a small thing.
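For reference, the overinstall route boils down to something like this (a sketch only; the apt and dpkg commands are commented because they need root on a real Debian box):

```shell
# Sketch of the overinstall described above. Privileged commands are
# commented out; run them on a real Debian Stretch system:
#   sudo apt install kde-full        # pulls in ~900 packages on Stretch
#   sudo dpkg-reconfigure sddm       # re-ask the lightdm/sddm question later
# Debian records the winning display manager in this file:
cat /etc/X11/default-display-manager 2>/dev/null || echo "no display manager installed"
```

The dpkg-reconfigure line is handy if you pick the wrong display manager at install time and want the debconf prompt back.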

There are a few extra things that are worth installing: eog and viewnior as image viewers (in this case already installed by me under XFCE), and pcmanfm-qt and thunar as additional file managers. The latter two are improvements for the following reasons: pcmanfm-qt (from LXQt) doesn't have the annoying Dolphin/Thunar behaviour of trying to reconnect network shares that may not be connected (which also affects KDialog). Thunar (which does at least have the option of turning off network drive management, unlike Dolphin) has the bulk renamer built into it, which I have used a lot. But Thunar is already there anyway, being part of the XFCE install. I also like the PopupLauncher widget which I use as a favourites menu instead of the KDE one; partly because repositioning the KDE launcher to the right corner of the taskbar causes it to default to the Leave section of the menu when opened, rather than Favourites. 

What you do have to be aware of, though, is that you now have more than one session, and by default the XFCE one will load, which puts you back into XFCE. (This is how you can have more than one desktop environment installed at a time, and it's a little bit weird to see some KDE widgets running under XFCE.) If you want to stay in KDE then either change the default X session (the first one in the list) or remove all of the session files other than plasma (which is the one for KDE). In the past I have simply renamed all the others except the one I wanted; this time I chose to change the default X session away from XFCE and leave all the files as they were. The command to change the default X session in Debian is as follows:
update-alternatives --config x-session-manager
When you run this it gives you a list of, in my case, four options, which were:
0 - the default
1 - startkde, the command to start KDE
2 - startxfce4, the command to start XFCE4, which was the current default
3 - xfce4-session - I have no idea how this is different from startxfce4

There are a few other situations that create more entries: notably, if you install Kodi, it installs its own session, and of course if you have more than two window managers there will be even more. The session files are stored in the /usr/share/xsessions directory and are .desktop files like the ones that create application launcher / menu entries. So after rebooting, KDE was the default.
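Those session files are ordinary .desktop files; here is a hypothetical example of what one looks like, written to /tmp so nothing system-wide is touched (the contents are a sketch, not copied from a real install):

```shell
# Hypothetical xsession .desktop file, written to /tmp rather than the real
# /usr/share/xsessions directory:
mkdir -p /tmp/xsessions-demo
cat > /tmp/xsessions-demo/plasma.desktop <<'EOF'
[Desktop Entry]
Type=XSession
Exec=/usr/bin/startkde
TryExec=/usr/bin/startkde
Name=Plasma
EOF
# The real files live in /usr/share/xsessions; the default among them is
# chosen with: sudo update-alternatives --config x-session-manager
cat /tmp/xsessions-demo/plasma.desktop
```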

mediapc had some issues, among them non-working MTP, so I reinstalled it with Debian 9.5, expecting MTP to come up automatically after installation as it had on my other computers. That didn't occur this time, and I had to complete the MTP setup manually as shown on this page from the Debian wiki. Basically you just install mtp-tools and jmtpfs, reboot, and it works in Thunar as expected. On an Android device there may be a device menu you have to go through to choose the mode; my Nexus defaults to charging and you have to go into Settings -> Connected Devices -> USB (Android 8.1). On the other two devices I tested (Moto and Galaxy J2), this setting cannot be accessed directly from the menu; instead you have to watch for a notification that you are connected via USB and change the setting through that. It is not necessary to enable developer mode. As it happens, both of these default to MTP, making a setting change unnecessary, unlike the Nexus.
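The manual setup amounts to something like this (a sketch: package names are as on Debian Stretch, the mount point name is my own choice, and the privileged/device-dependent commands are commented because they need root and a connected phone):

```shell
# Sketch of the manual MTP setup:
#   sudo apt install mtp-tools jmtpfs
#   # then reboot, after which Thunar should mount MTP devices automatically.
# jmtpfs can also mount a device by hand at an arbitrary mount point:
mkdir -p "$HOME/mnt/phone"
#   jmtpfs "$HOME/mnt/phone"          # mount the first detected MTP device
#   fusermount -u "$HOME/mnt/phone"   # unmount when finished
```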

From experience so far with KDE, a number of the widgets aren't optimised to save space, so on my two-screen computers I have three panels: two along the bottom for tasks and launchers, and a third on top of the secondary screen for the other stuff, like disk free space, system load, clock, system tray and notifications. I tried having this on the side of a portrait screen, but some of the widgets won't work on a side panel. So I have ended up with it on the top of the screen, where unfortunately it has to be double height to work with some widgets, but I can get by with that on the secondary display.

Rearranging the screens has now been further extended by adding another screen over the desk for pc4 so it can be used easily again. Otherwise there would be little point in having pc4 at all. To achieve this win10pc has gone down to a single screen in landscape mode (for compatibility with Remmina when remoting to it, which is the way I use it 99% of the time).

Saturday 1 September 2018

HOWTO: Change application font sizes in XFCE

Since I am going to have to keep at least one of my computers running XFCE (it doesn't need remote access) because MTP works better on Debian 9.5/XFCE for now, I needed to look at how to set menu and application font sizes. There is no obvious GUI setting for these. It was not an issue on serverpc until now, but changing the display around (rotating) has for some reason shrunk the font sizes in the Whisker Menu and in Gimp, which is really annoying. Qgis also has an issue on this computer with tiny check boxes next to layers, which may be affected by similar settings.

It's a bit hard to find, but in the Settings manager, if you go into Appearance and then Fonts, the answer is the Custom DPI setting. The default is 96, I think, but in my case I wound this up to 150, with the default font set to Carlito 9. It's important to note that the default font setting only affects the panel, title bars and other miscellaneous stuff, i.e. it does not affect the menus and other parts of application UIs. In this case, adjusting the DPI well above where it already was fixed the issue.
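The effect of the DPI setting is simple arithmetic: fonts are sized in points (1/72 inch), so the rendered pixel size is pt / 72 * dpi. A quick check of what 96 vs 150 DPI does to a 9 pt font (the xfconf-query line is the terminal route to the same setting, assuming a running XFCE session):

```shell
# Font points to pixels: px = pt / 72 * dpi. The same Custom DPI setting can
# be changed from a terminal with (needs a running XFCE session):
#   xfconf-query -c xsettings -p /Xft/DPI -s 150
awk 'BEGIN { printf "9pt at 96dpi = %.2fpx; at 150dpi = %.2fpx\n", 9/72*96, 9/72*150 }'
# -> 9pt at 96dpi = 12.00px; at 150dpi = 18.75px
```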

This all seems a bit strange as the small menu and app text only started today and I have never had to adjust this setting before now but that's how it works.

Friday 31 August 2018

Life with KDE [4]

There have been a few small hiccups in a couple of areas with KDE so far.

One is that MTP support appears not to be as good as in Thunar (XFCE). I have not been able to get a reliable connection to my Samsung Galaxy J2 phone. As a result I have put a cable onto the Windows 10 computer so I can use that to transfer files off the Galaxy J2.

On the other hand the KDE Connect setup seems to work very well and be a viable alternative to Bluetooth for transferring individual files, but it only seems to be able to handle one file at a time.

KDialog is another thing that could work better. Chrome needs this dialog in order to open files, but it is too much like Thunar in not being able to deal with network shares that are offline. I am looking at using a file manager other than Dolphin, as there are a few alternatives like PCManFM-Qt and possibly Nautilus or Nemo that don't freeze when they can't connect to a network share. The Firefox Aurora browser doesn't use KDialog, so I may end up switching browsers for day-to-day stuff.

Generally the experience is OK except that resuming from hibernation can be more likely to freeze up.

Thursday 30 August 2018

Life with KDE [3]

Since writing about the NUC's problems installing Debian and Kubuntu in UEFI mode, I discovered the computer has a legacy boot setting, which is essentially BIOS boot, and with that enabled there was no installation problem. Kubuntu was installed successfully first, then I overwrote the installation with Debian-KDE, and then put the kde-full package on to complete the installation.

So I now have two computers running KDE. The others will follow in due course but not urgently, just whenever they are next reinstalled. Once again, as previously described, the settings were put into X11 to handle the screen's 1360x768 resolution and it has worked perfectly. Since I have two screens of this type on mediapc, the issue is that the screen is detected as 1920x1080 by default, and 1360x768 is not one of the default resolutions Xorg provides in many distros.
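For the record, those X11 settings amount to adding the missing mode with xrandr, along these lines (a sketch: the output name HDMI-1 is an assumption that varies per machine, and the modeline numbers are what `cvt 1360 768` generates):

```shell
# Generate a modeline for the missing resolution; falls back to a canned
# cvt-style modeline if the cvt tool is not installed:
cvt 1360 768 2>/dev/null || echo 'Modeline "1360x768_60.00"   84.75  1360 1432 1568 1776  768 771 781 798 -hsync +vsync'
# Then add and select it (commented; needs a running X server, and check
# the real output name with `xrandr -q` first):
#   xrandr --newmode "1360x768_60.00" 84.75 1360 1432 1568 1776 768 771 781 798 -hsync +vsync
#   xrandr --addmode HDMI-1 "1360x768_60.00"
#   xrandr --output HDMI-1 --mode "1360x768_60.00"
```

Putting the equivalent Modeline into an xorg.conf.d snippet, as I did, makes it survive reboots without a script.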

I can't really be bothered working on the obvious efibootmgr problem, as it will probably take weeks due to the voluntary nature of much of the maintenance of these packages. It would be OK if I had a spare system, and apparently this issue is not unknown. But right now I can just use BIOS mode, because I am never going to need UEFI on this computer; I don't use it now on the only other computer in the house that has the capability.

Monday 27 August 2018

Life with KDE [2]

Things are getting back to normal with KDE on mainpc and I am getting on with work today.

I was planning to have the same configuration on bedroompc (the NUC) but have run into problems with the Linux implementation of EFI when this platform is set to boot UEFI and Debian or Ubuntu is installed in that mode. So far, after numerous tests and tweaks, we seem to have hit some kind of internal limitation in the efibootmgr module on the number of times a Linux operating system can be installed on the platform (in this case, the NUC6CAYH PC with the NUC6CAYB board installed).

Since this is a fairly new NUC it is still under warranty, but since Intel doesn't really support Linux, and as I have flashed it to the latest BIOS AND it can install and run Windows 10 without any issue, it seems to fall to the Debian or Ubuntu maintainers to take a look at what is going on with efibootmgr, which is unable to complete the setup task of installing and configuring grub-efi-amd64. Enough of grub does get installed for the hard disk to be recognised as bootable in the NUC's EFI BIOS and for grub to start up to a command prompt; it seems the part of grub that should boot into the operating system is at fault.

Since I have tested the boot with both Debian and Kubuntu, it must come down to the boot manager code and this apparent limitation on the number of installs. This computer has been successfully running other Linuxes since new, mostly Xubuntu and Debian, without problems; in fact I installed the same version 9.5 of Debian with XFCE on it only a few weeks ago. Given that the BIOS has been wiped and upgraded, that grub can boot to a shell, and that Win10 can run OK, it looks like I now have a process ahead of me of working with the Linux community to try and debug the problem in their EFI boot implementation.

Sunday 26 August 2018

Life with KDE [1]

Having jumped in boots and all into KDE on mainpc (having had to reinstall Debian and choosing KDE instead of XFCE), there are a few things to be aware of. At least we get version 5.8 of KDE Plasma on Stretch; I would have half expected it to be 4.x because of Debian's repos holding older versions of stuff, but having a 5.x release is good. Of course I would want 5.13, but maybe Buster will have something like it when it comes out next year.

The standard KDE install task from Debian is only a minimal desktop that lacks some useful stuff, and it also defaults to an ugly-looking logon screen with Debian wallpapers. On my test config I installed extra packages for the Breeze login theme; I had to install the Synaptic package manager to find them, as the KDE software centre could not list them.

To make life easy you can install the kde-full package, which just puts all the KDE stuff in, because Debian doesn't do this at install time. This will overcome numerous frustrations and a lot of trying to figure out why something doesn't work like it should (like the printer not being detected when you plug it in). Probably this situation is no different from other desktops like XFCE not being completely installed; the difference is there is so much more in KDE that you are probably missing out on a lot, with something like 500 packages left out. OK, you may not need them all, but if you have a powerful system and are not too worried about bloat, it is probably easier just to install the whole lot at once.

After getting these extra packages installed, the printer was straightforward to get going and print from. I have spent much of the evening tweaking various KDE things and getting up to speed on it, which has been slow, but tomorrow I will get some real work done. I have really lost a day, but unfortunately it has been that kind of week of getting very little done overall.

Saturday 25 August 2018

KDE remote desktop sharing

One of the best things about KDE is the number of apps they have available that are integrated into the shell. As XFCE lacks its own remote access integration, you have to install x11vnc and muck around trying to get it to run as a service, which I have yet to be able to complete. In KDE you have several apps available; I chose Krfb (RFB is the protocol that VNC uses), which comes up as "Desktop Sharing", essentially the correct description of what I want to achieve.
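For anyone stuck on the x11vnc-as-a-service problem, one route I have seen suggested is a systemd user unit, roughly like this (a sketch only: the flags and the password file path are assumptions; check `man x11vnc`, and create the password file with `x11vnc -storepasswd` first):

```shell
# Hypothetical systemd user unit to keep x11vnc running for the current
# X session (flags are a sketch, not tested on my machines):
mkdir -p "$HOME/.config/systemd/user"
cat > "$HOME/.config/systemd/user/x11vnc.service" <<'EOF'
[Unit]
Description=x11vnc screen sharing for the current X session

[Service]
ExecStart=/usr/bin/x11vnc -display :0 -forever -loop -rfbauth %h/.vnc/passwd
Restart=on-failure

[Install]
WantedBy=default.target
EOF
# On a real system, activate it with:
#   systemctl --user daemon-reload && systemctl --user enable --now x11vnc
```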

With just a few clicks I was able to enable the desktop sharing and also enable it as unattended because I want to be able to connect to this desktop all the time without having to go to it to authorise a connection.

It will be interesting to see how Plasma develops with its latest efficiency optimisations. It is quite possible some of my computers will be switched to KDE instead of XFCE, but there is no huge rush as I just want to spend some time figuring out how everything works with KDE. Most of my PCs are fast enough to have few issues with the somewhat greater resource use in KDE, and I will be keeping Debian as the base OS, no more Ubuntu for now.

The only real thing with the Debian install of KDE is the clunky default logon theme which you'll want to change to the Breeze logon, for which you have to install additional packages. I last had a play with the Breeze logon when I had LXQt on one or two of my computers. Incidentally LXQt is now an option to install in the Buster alpha and the next release of Lubuntu will use it by default.

One of the reasons KDE looks good is that it is making steady progress towards integrating Wayland, something XFCE is falling further and further behind on. Pretty soon Wayland will replace X11 as the default display protocol, and then some of the Linux desktop environments will not support it very well. Interestingly enough, Ubuntu 17.10 installs Wayland by default (Xubuntu doesn't, so I haven't seen Wayland on anything yet). In Ubuntu 18.04 they reverted to Xorg after discovering Wayland still has some issues.

Mainpc at the moment is getting reinstalled with Debian and KDE because some sort of stupid crap bug resulted in an apt autoremove command removing a lot of core install packages including all of LibreOffice and some essential stuff, basically stuffing the system and making it unable to connect to the internet. So Debian 9.5 will be on it along with KDE and this will be only the second time I have tried KDE on one of my major computers that I use every day. This time hopefully I can make it stick (last time it was too hard to migrate from XFCE).

Friday 24 August 2018

Computing resources optimisation [6]

Well, computer no. 4 is not going to be "ministrypc"; it is going to be a computer for displaying pictures for the maps project. I just decided this is the best way to do things, because its screen resolution of 1600x900 is higher than mediapc's, but also because I want to keep mediapc just for music playback and have a computer that works specifically on the mapping.

It had Buster running on it, but I have had a few problems with this alpha release, so last evening I rolled Stretch 9.5 onto it because I couldn't get the Synaptic package manager to run and some other updates wouldn't install. The other thing I am doing with it is testing other desktop environments because, although XFCE has been pretty good, I am keeping my options open as its development seems slow. XFCE version 4 was released way back in 2003 and they don't seem to have any ambition to move to a version 5 any time soon. There is a 4.14 coming out soon, but 15 years is an awfully long time for software not to have had a major new release.

I have decided to evaluate KDE on it even though it has a reputation as a resource hog. The latest version of KDE Plasma (5.13) has been optimised to be much leaner than previous releases, so it will be interesting to see how I get on with this.

For doing stuff that requires the cellphone to be tethered, I already have another computer (bedroompc) that can do that, and with the x11vnc updates I can VNC to it quite easily and reliably when I need to. It doesn't require the separate wireless bridge because that computer has built-in wireless.

Friday 17 August 2018

Fix for x11vnc problems on Stretch

Following up on my last article: having tested x11vnc for a day on mediapc, it crashed with the "stack smashing detected" issue. Reported as a Debian bug here:

This issue affects multiple distros, including ones unrelated to Debian, such as Red Hat, Arch and Fedora, as well as related such as Ubuntu.

The recommended solution for Debian Stretch is to install the x11vnc packages from the testing repository. I have now applied them to mediapc for further evaluation.
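One hedged way of doing that pull-from-testing on Stretch looks like this (the pin file name is my own choice, and it assumes a `testing` line has been added to sources.list; the file is written to /tmp here and copied into place on a real system):

```shell
# Keep the system tracking stretch by default while allowing one-off
# installs from testing (file name is arbitrary):
cat > /tmp/99default-release <<'EOF'
APT::Default-Release "stretch";
EOF
# Then, on a real system with a `testing` entry in sources.list:
#   sudo cp /tmp/99default-release /etc/apt/apt.conf.d/
#   sudo apt update
#   sudo apt -t testing install x11vnc
cat /tmp/99default-release
```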

Hopefully this will make x11vnc reliable enough to be used for my purposes. This is about the first update to x11vnc in eight years.

Wednesday 15 August 2018

HOWTO: Share a screen in Linux

Screen sharing is a generic name for allowing the screen on one computer to be accessed from another. It is similar to, but different from, Windows Remote Desktop. There are a number of options that are mostly based on VNC. I have a Windows 10 Home computer here that has TightVNC Server installed on it, and with Remmina running on one of my Linux computers, the user session running on the Windows 10 computer can be controlled through Remmina while still being displayed on the Win10 PC's screen. This is different from RDP, which locks the screen when you log in to the remote computer, even though you are connecting to the same session.

However, the scenario where I want to control the current session running on a Linux computer is a bit more difficult, because most VNC programs work differently under Linux than they do under Windows, TightVNC included. Under Linux, the current session under X Windows (the basis for most GUIs in Linux, although Wayland is starting to be adopted as a replacement) can't be shared in most of the common VNC implementations. These programs generally create a new remote session which the user can run stuff in, but they can't share the default user session on the computer. This means that many of the FOSS VNC packages won't perform this function, so you can't expect to install some VNC system and have it just work.

Exceptions to this include the following packages:
  • Vino is the screen sharing package for Gnome. It is provided with Ubuntu and variations. Standalone packages are also available for other distros and will work with XFCE, but it is more and more being integrated into Gnome these days which makes it difficult to use with other desktop environments. I have used Vino in the past with Xubuntu, but the impression I have now is it would be difficult to use in Debian/XFCE because the configuration is supposed to be done with a Gnome-only tool. There is a desktop sharing tool in Cinnamon which is probably Vino adapted to a non-Gnome environment. I haven't used Cinnamon since I used to have Linux Mint a couple of years ago, so I don't consider this to be an option.
  • X11vnc is a package formerly developed by Karl Runge. It is in the repositories of major distros such as Debian. It is easy to set up and install, but has not been actively developed for several years, and my impression of it is that it is not very reliable, tending to crash a lot. I remember a year ago using it with Xubuntu, and it seemed to have better performance than Vino. (I will take another look at X11vnc to see if I just need to change some configuration settings, but I was unable to make it run as a service on my computers.)
  • RealVNC is unlike most of the other VNCs in that by default it shares the current Linux user session. However it is commercial software although a limited free license is available. I haven't bothered to install it because of this licensing issue.
  • Xpra is a package that among other things is supposed to be able to "shadow" the current user session on a computer. I tested it and it seems to be difficult to configure and make work successfully. The documentation for it is tricky to understand. It looks like it has some potential but is either very complex in operation, or needs a lot of work to make it as user friendly as some of the other packages.
  • X2Go is another open source community supported package. I am currently testing this on mediapc to be able to control it from mainpc. There is a specific package with it called X2godesktopsharing that is specifically used for the screen sharing functionality. Unfortunately X2Go turns out, like Xpra, to be another poorly written and supported package.
  • NX/NoMachine is the last option to consider. X2Go is based on it. NX uses its own protocols and NoMachine provides a limited free version of a commercial package. There used to be an open source part called FreeNX but it is well out of development.
So I have been looking into a lot of options this week. There is in theory another option, an RDP server for Linux, which I believe is xrdp (rdesktop is the client side). I haven't looked into that one.
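For completeness, the x11vnc route from the list above boils down to a single command plus knowing which port to point the viewer at (a sketch: display :0 and the password file location are assumptions that vary by setup):

```shell
# Minimal x11vnc invocation to share the existing session (commented here
# because it needs a running X server):
#   x11vnc -display :0 -auth guess -forever -rfbauth ~/.vnc/passwd
# VNC display numbers map to TCP ports as 5900 + N, so a viewer such as
# Remmina connects to host:5900 for display :0:
for n in 0 1 2; do echo "display :$n -> port $((5900 + n))"; done
```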

Right now I only have NoMachine left to test having gone down the list, or I could still consider RealVNC if I run out of options.