Wednesday 28 November 2018

Linux video driver issues

One of the challenges with Linux is that bug fixes can be slow to arrive, and software can break when libraries in one component of a distro are updated on a different cycle from other parts. For example, one version of Qgis depends on a package repository called Ubuntugis, a specific branch maintained by the Ubuntu community. Problems occur from time to time when Ubuntugis packages are updated on a different cycle from Ubuntu itself or from Qgis.

I have had NVidia graphics cards in several computers over the years. When I was still running Windows I had a quad head NVidia card in mainpc, and although NVidia does produce its own drivers for Linux, there were enough issues with them that I took this card out and replaced it with a $50 two-head Gigabyte NVidia card using the open source Nouveau driver instead. However, Nouveau seems to be difficult to keep updated and causes problems with other software, notably Google Earth. The latest issue came when an update to some of the xorg components (the basis of the X11 graphics stack) had problems with the Nouveau driver and prevented a screen from being rotated.

Fortunately Intel graphics are well supported, as Intel develops open source drivers for Linux itself. As a result I was able to remove the NVidia card from mainpc and just rely on the onboard graphics, which of course also lets me use the later versions of Google Earth.

Thursday 22 November 2018

Computing Resources Optimisation [2C]

Still looking at this, really just fine-tuning expectations and specs. Last time I talked about task separation, and running stuff on mediapc is still really a test case. What has turned out to be important, however, is the limitation on mediapc caused by its old CPU, which is slow to decode some video formats. This led to problems recently with video playback from Vimeo. There was an issue with bandwidth limits on my internet feed, but when the limits were eased I still had video dropouts, and since the NUC had no problems at all, I concluded the cause was most likely the hardware spec of this computer, which is the oldest full-size computer of the three I use daily (there are also two mini-ITX computers that would have even more issues).

However the present mediapc is perfectly OK playing back most of the formats my saved videos are in - generally MP4 or MKV. For this reason it will be OK as a playback-only computer slotting into the "pc4" role. It might get called "playerpc" or something new.

If this optimisation happens this is the summary of how the computers will change:
  • mainpc stays as is now
  • mediapc's chassis has the board removed and put into pc4
  • A new board, CPU and RAM are added to mediapc's chassis and it gets called "serverpc" or maybe "secondpc" or "graphicspc".
  • The present "serverpc" becomes the new "mediapc"; the only change is that its disks are taken out and put into secondpc, after which it gets the disks from the old mediapc. Being a Skylake, this computer has a good enough spec to be considerably superior to the present mediapc, which is only an Ivy Bridge, and with 16 GB of RAM it can be used for a lot of things. It will continue to be used for backing everything up.
  • The compact chassis gets the old mediapc board but keeps its existing disks. 
Anyway, of course, it remains to be seen whether these plans progress in any way.

The original proposal for the new system architecture gravitated at the time towards a Coffee Lake system. However, the low end Coffee Lake CPUs (the Celeron and Pentium G models) are in very short supply worldwide. I have not bought a Celeron for many years, as the slightly lower cost has in the past not been worth the hit in performance, although the gap has closed more recently. The Pentium G, which would be a Core i1 if there were such a thing, has become even more powerful and is now more or less what a Core i3 used to be. This is absolutely typical of Intel: they keep shifting the goalposts. Often they try to tempt people into buying a CPU on the name alone, an i3 or an i5 or an i7, and buyers then discover that their especially cheap Core i series chip is a crippled version, because some models were produced with a lowered spec to compete aggressively. This time, fortunately, the goalposts have shifted the other way: an i3 is now a higher spec than was originally the case, and a Pentium G is now 2 cores / 4 threads instead of the 2/2 of all the other Pentium Gs that I have (which is all three of my computers with a microATX board in them).

So because of the extra cost of a Coffee Lake system, and with the i3, which was the lowest spec CPU I could actually get, now out of stock, I took another look and came up with a Kaby Lake system based on the Gigabyte B250M-D3H board, because the Kaby Lake Pentium G is still readily available and I suspect it will remain so until Coffee Lake Pentium Gs become more abundant. Although the RAM for this option is slightly dearer, the saving on the CPU more than cancels this out, resulting in about $80 less cost overall.

However, as the Ivy Bridge system is slated to become a media player and actually has trouble playing back some formats, there is another way to use that $80: rejig things a bit. If the new Kaby Lake system gets only 16 GB of RAM, reused from the Skylake system, then a second new Kaby Lake system could replace the Ivy Bridge outright, with that second Kaby Lake and the Skylake each getting 4 GB of new RAM, and the whole lot would sneak in for a similar amount to the original spec. Then the Ivy Bridge board and CPU could be sold or kept in reserve.

Wednesday 14 November 2018

Scanning with Linux

As we all know, I have gradually moved lots of things I do from Windows to Linux over time, but I still have a computer running the Home edition of Windows 10. One of the few things that still runs on Windows 10 is my Epson Perfection V200 Photo scanner. Well, it did work well on Windows 10 until the recent update, and as it turns out, lots of Epson scanner owners are now finding they have scanning software problems on Windows 10.

Although I was able to get my scanning done eventually with a lot of mucking around, I have decided not unnaturally that maybe it is time to take a look at scanning on Linux. The last time I looked at this was over a year ago, and at the time, the options didn't look very attractive.

This time around I decided to take a look at a shareware package called VueScan, which is cross platform, being produced for Windows, macOS and Linux. It was easy to install the basic package on Linux. It then told me I needed a driver and provided a link to the Epson website. Much to my surprise I found a .deb package available for download for Debian/Ubuntu 64-bit.

I then found that SANE needed to be installed, so I looked up the Debian scanning page and ran the tests it describes, which mostly seemed to work (scanimage -L didn't seem to work, however). I then ran the install.sh script to install the Epson .deb package, which seemed to install OK.

After turning the scanner on and off again, I found that ImageScan (the Epson software package) and VueScan both came up and detected the scanner OK. 

So far I have tested ImageScan but not yet VueScan. Both appear to recognise the scanner's film and print scanning capabilities, although I have not tested those out yet. A basic document scan with ImageScan worked well.

This should work out a lot more conveniently, but I do remain concerned about being dependent on Epson producing and updating its drivers for this scanner in the future, because I imagine eventually they will stop supporting it. The joys of owning hardware. At least with the printer, the HP PCL print language is so widely supported that even very ancient printers can continue to be used more or less forever.

Wednesday 7 November 2018

Computing resources optimisation [2B]

Last time I wrote a post in this series it looked at upgrading some of the hardware I have. This time it looks at task separation, or to put it another way, using mediapc to do some of the things I have looked at previously.

This keeps coming up because I have four computers and one of them hardly gets used. Granted there is a case for using PC4 to evaluate new editions of Debian but I don't have anything to run on it. If I make it the media playback computer then mediapc is redundant.

So the idea of using mediapc to do some of the day to day stuff such as email, perhaps with just a single screen, is gaining some traction at the moment. If I follow through on this idea then pc4 will get the second screen currently on mediapc for playback purposes and will be set up to play stuff. However I expect the master media library will stay on mediapc with its RAID array for now. We will just sync as we do with the other computers that play media.

mediapc will have to be set up to hibernate and may need a reinstall of Debian, because hibernation is not very reliable with sddm running on top (it works well with lightdm).

The result is that mainpc would be set up just for maps, alongside serverpc. There is quite an advantage in having mediapc with one screen dedicated to things like email (there is no new mail widget in KDE at the moment that can tell me when a message has come in), as it can be left up all the time so I can just glance at it. At the moment I can only monitor new messages with my phone.


The tricky part is that I can't easily use a third keyboard and mouse in the current layout (mediapc's keyboard is accessible, but it isn't ideally placed for typing), so it may end up that mediapc is remoted with Remmina, or else I just press a button to switch the main keyboard onto it when I need to.

Some testing is going to start fairly soon, before I have too many more crashes on mainpc, but it is tricky because all the resources are on mainpc and I can't see myself moving files across to mediapc, so it will be at some disadvantage in having to access things over the network.

Monday 5 November 2018

Qgis functions for labels

One of the great things about Qgis is being able to use expressions in a range of places. This is just a little bit about using a function to combine various fields into the caption for a feature's label.

In NZ Rail Maps I have features called Locations, which cover a range of things - stations and mileposts are the main ones. These can have a name and up to two distances attached to them. The requirements vary depending on the type of location, and the label caption is made up of seven fields combined over three lines. This is the function used to create that label.

if(coalesce("caption" ,'') = '', '', "caption") ||
if (coalesce("Distance",-1) = -1,'', if(coalesce("caption" ,'') = '','','\n' )|| trim(concat("Distance",' ', "Unit", ' ', "Label"))) ||
if(coalesce("Distance 2",-1) = -1,'','\n' || trim(concat("Distance 2",' ', "Unit 2", ' ',"Label 2")))

Basically it consists of the output of three conditional evaluations (if() statements) concatenated together. Each of these if statements is responsible for one line of the label.
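
As a trivial sketch of that structure (these literal values are made up for illustration, not taken from the layer), here are three conditional pieces glued together with the double pipe:

if(true, 'Example', '') || if(true, '\n12.5 Km', '') || if(false, '\nsecond distance', '')

Each if() contributes either its piece of text or an empty string, so this evaluates to a two line caption - 'Example' with '12.5 Km' underneath - and the third piece adds nothing.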

I'll break it down line by line.

if(coalesce("caption" ,'') = '', '', "caption") || 

So that first line is pretty simple: it checks whether the caption has anything in it. The coalesce() function is used to overcome null propagation, where a null caption would propagate through the entire expression and result in only null being returned, regardless of the rest of the function. So the "caption" field is output if it is non-null. The || on the end simply appends the result of the next if() statement to the label caption.
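
A minimal sketch of the propagation problem, using the same field name (this is just the standard null handling of the || operator and coalesce() in Qgis expressions):

"caption" || '\n'
coalesce("caption", '') = ''

The first expression returns null whenever "caption" is null, and that null would swallow everything else concatenated onto it. The second returns true when "caption" is either null or an empty string, so the if() can substitute an empty string in both cases.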

if (coalesce("Distance",-1) = -1,'', if(coalesce("caption" ,'') = '','','\n' )|| trim(concat("Distance",' ', "Unit", ' ', "Label"))) || 

So the first thing we do for the second line is check whether its first field, Distance, is empty (null), again using coalesce(). This time, because it is a numeric field, we compare against a number instead of a string. If there is a number in there, a second if() statement, which looks like the first line's evaluation of the "caption" field, decides whether to put a newline character into the label. This ensures the label doesn't start with a blank line when there is nothing in "caption": while a station will have a caption (its name), a milepost is not named and instead starts with a distance. The rest of the statement just adds the "Unit" and "Label" fields onto the second line after the distance, and the last thing is the double pipe to concatenate the third line on.
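
As a hedged worked example, substituting made-up milepost values (a null caption, a Distance of 12.5, a Unit of 'Km' and a null Label) into that second statement:

if (coalesce(12.5, -1) = -1, '', if(coalesce(NULL, '') = '', '', '\n') || trim(concat(12.5, ' ', 'Km', ' ', NULL)))

Because the caption is null the inner if() contributes no newline, and concat() treats the null Label as an empty string, so the result is just '12.5 Km' sitting on the first line of the label. For a named station the inner if() would add the '\n' and the distance would drop onto the second line.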

if(coalesce("Distance 2",-1) = -1,'','\n' || trim(concat("Distance 2",' ', "Unit 2", ' ',"Label 2")))

The last line is essentially a repeat of the second line, using different fields which are duplications of the ones used to make up the second line. It is simpler because it assumes that there is always a line above it that has already been filled by the previous code and therefore it doesn't need to check before adding the newline character to move its output down onto the third line of the label.
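
Two made-up substitutions show the third statement in isolation - one for a location with no second distance, one with the "Distance 2", "Unit 2" and "Label 2" fields filled in (the values are invented for illustration):

if(coalesce(NULL, -1) = -1, '', '\n' || trim(concat(NULL, ' ', NULL, ' ', NULL)))
if(coalesce(20.75, -1) = -1, '', '\n' || trim(concat(20.75, ' ', 'Miles', ' ', 'M20')))

The first evaluates to an empty string, so nothing is added and there is no stray third line; the second evaluates to a newline followed by '20.75 Miles M20'.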
It took me a lot of work to get to this function; after years of using a simpler function that would put out extra blank lines unnecessarily, I only wrote the new version last night. Here is the old function:
if(coalesce("caption" ,'') = '', '', "caption" || '\n') || trim(concat("Distance",' ', "Unit", ' ', "Label", ' ',"Distance 2",' ', "Unit 2", ' ',"Label 2", ' '))

That one only uses two lines and the second line will always be present, but will be blank if the fields are empty. You can have stations for which you don't know a distance, so that second line is optional. Hence in the new function we have code that checks field values to see if the second and third lines are needed, and if they aren't, nothing is output, rather than a blank line.
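
To make the difference concrete, here is the old function with made-up values substituted for a station that has a name but no distances:

if(coalesce('Example', '') = '', '', 'Example' || '\n') || trim(concat(NULL, ' ', NULL, ' ', NULL, ' ', NULL, ' ', NULL, ' ', NULL, ' '))

The trim(concat(...)) part collapses to an empty string, but the '\n' has already been appended, so the label comes out as 'Example' followed by a blank line. The new function only emits a newline inside the same if() that emits a distance, so that blank line disappears.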
The reason I haven't had this function up to now is that I hadn't realised it was possible to concatenate multiple if() statements together. If everything had to fit into one if() statement, that statement would be extremely complex. I know from writing spreadsheet functions that you can do a lot of complex things with if() statements, but multiple nested statements make for code that is very complex and hard to understand and debug. Ever since I formally studied programming at CPIT more than 20 years ago (building on my long experience from high school), I have preferred to avoid complex nested conditionals and simply break the conditional evaluations up into multiple sequential statements.

So the new label function took a couple of hours to test and debug. There are a couple of things that make this easier. One is to write the function in a separate text editor rather than directly in the Qgis expression editor. The other is to test one piece at a time in the editor, because its syntax checker isn't very helpful and doesn't highlight exactly where a problem is. So for this function, each of the three statements was written in a LibreOffice Writer document and then pasted one at a time into the Qgis editor for testing.