Tuesday 21 July 2020

General update week 30, 2020

I wrote about the fun I had getting a system to work with static IP addressing (it is running Debian 10 with LXQt). Since that system could be statically configured with no issues, I gave the other Debian 10 / LXQt system a static configuration as well. The second computer only has one network adapter, but there is still a benefit: since the network router and wireless broadband router are only switched on when needed, I no longer have to wait for the network router to come online and be ready to supply DHCP addresses before the client computers start up. Everything is working well now.

At present the backups are all being done locally on each computer with rsync, as previously described, since I have yet to do the further work needed to make an rsync daemon function as planned for trialling backups. Another option I could look at is the BackInTime application; however, I am generally wary of anything that does not do straight file-by-file backups to the destination but instead creates its own archive format. This is less of an issue with commercially supported proprietary software, but I am very insistent on not getting locked into the archive formats of open source software that may not be supported forever. That is probably why so many people prefer to just use rsync.

mainpc had an issue a couple of weeks ago when some sort of timeout on one of the RAID array disks caused mdadm to push it out of the array. The disk then came on and off line seemingly at random for a time before stabilising and staying online. smartctl testing showed no errors on it. After a week it was re-added to the array, which is now running on 2 disks again with no apparent problems. However, as these disks are 4 years old (last replaced in 2016), I am planning to buy 2 new disks for the array as soon as I can, at the rate of 1 disk per month. The old ones will then be available for backups, although I would still prefer to buy completely new backup disks.
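For reference, the repair sequence was roughly the following, sketched as shell commands. The device names /dev/md0 and /dev/sdb1 are placeholders here; check /proc/mdstat and your own device list before running anything like this.

```shell
# Placeholder device names: consult /proc/mdstat for the real array and member.
cat /proc/mdstat                            # confirm the array state and the missing member
smartctl -a /dev/sdb                        # review SMART attributes and the error log
smartctl -t long /dev/sdb                   # run an extended self-test before trusting the disk
mdadm --manage /dev/md0 --re-add /dev/sdb1  # put the member back; mdadm resynchronises it
cat /proc/mdstat                            # check on the rebuild progress afterwards
```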

The disks currently in the array are WD Blacks and that is the proposed replacement at $250 each. SSD is considerably more expensive, but by the time I next have to replace any of these disks the cost of SSD could well have come down to something more affordable. WD Black SSDs in the size needed are very expensive at around $1000 regular pricing each (four times the HDD price) and only come in M.2 format, which is problematic since it is rare for a board to have more than one M.2 slot, at least in my price range. I have checked, and the H97 and B250M-D3H boards each have one M.2 slot but not two. There are adapters available, but another option is simply to switch to a cheaper SSD brand such as Intel; the performance is lower, but if it at least equals a standard HDD the cost comes down considerably, to around $400 each. In a few more years, I expect a pair of SSDs for the array will be in the same price range as regular HDDs. I am still keeping a lid on the amount of storage in this computer, resisting the temptation to buy bigger disks; in spite of growing photo and general data storage, it has so far proved possible to manage the existing storage well enough to avoid running out of space.

Monday 13 July 2020

Building a Linux PC for under $1000 (TechRepublic)

TechRepublic has an interesting article on building a great Linux PC for under $1000. Although the article clearly refers to US dollars, which are usually lower in price terms, the prices are mostly comparable with what we pay in NZ.

My last couple of build experiences were different. About 3 1/2 years ago I built a low cost system using a Gigabyte H110-S2H board, my usual choice of a Pentium G CPU, and 8 GB of RAM for around $300. I think it was initially installed with Windows 10. It was cheap because I recycled a case, power supply and hard disk, so I only had to pay for the board, CPU and RAM, which cost about $100 each at the time. The board has since been moved into a different chassis and repurposed for video playback, because it only has 2 DIMM slots. I must have been really strapped for cash at the time, as all the other systems I have built used slightly more expensive boards with 4 DIMM slots.

My most recent builds, about 18 months ago, involved buying a pair of Gigabyte B250M-D3H boards and Pentium G CPUs for a total of about $500. I had planned on getting newer B360 boards and Pentium Gold CPUs, but there was a shortage of those CPUs at the time, and I discovered that Dove Electronics, a wholesaler we used when I worked in high schools, had only two of the B250 boards left, probably the last stock of this model in the country. I had these shipped to me along with the CPUs and 32 GB of RAM for one of the boards (another $500), then rearranged a whole lot of things: the aforementioned H110 board was taken out of its chassis and replaced by one of the B250s with all the new RAM, and the other B250 went into a new system with some RAM from the H110 board (which previously had 16 GB, now split between it and one of the B250s).

The oldest thing I have here is a Gigabyte Z97, the computer I am writing this on, with a heap of RAM. I can't quite put a date on when I got it, but it is still going strong and probably has years of life left in it unless the board blows up or something, because it still has heaps of performance for everything I can throw at it. The hard disks are the only parts I have had to replace, and as one has been a bit finicky lately I might have to scrape together some cash to replace both of them soon, which is a bit stiff with my current resources. This system is also the only one of my computers (apart from the Mini-ITX system) for which I actually purchased a chassis, all the others being recycled from schools. The B250 that had only 8 GB recently got a boost to 16 GB, which cost around $70.

Anyway, going from the article, pricing is pretty similar, and if you had to buy everything new it soon adds up. The only really different cost I can see is for RAM; 32 GB at current NZ pricing is about double what the author quotes ($275 at PBTech, which is itself a fair drop from what I paid 18 months ago).

Building your own PC is actually pretty straightforward and well worthwhile. The most complicated task you would typically expect to encounter is installing the CPU and fan, and I have never had any issues there despite the potential for damaging the chip or board. I usually build my systems bare on a wooden base on a table, and then install them in the chassis once I have verified everything is working properly.

Considering that in 1997 my first serious PC cost nearly $4000 from a shop (with a cheap inkjet printer), it is amazing how much the cost of parts has come down (making no allowance for inflation). Even having a shop build you a PC today wouldn't cost anything like that much, maybe $1500 for a system built in NZ. Of course, some companies have systems built to order overseas and then shipped directly to the customer. If you use Windows you can get it for next to nothing with a shop-built or OEM PC, while the retail edition for adding to an existing computer is a bit pricey (I have one here for the one and only Windows computer, the Mini-ITX system, that I still use occasionally for compatibility reasons). Running Linux on all my home-built systems takes the OEM licensing cost advantage out of the picture completely, besides all the other benefits that Linux brings.

Tuesday 7 July 2020

Rsync Backup System [6]: Latest Progress / Using The Rsync Daemon

Because of issues with rsync over ssh I am looking into using the rsync daemon instead; the commentary on that is below. Until I can get it working properly I am backing up each computer individually using its own removable drive bay. This is done by changing the ownership of the backup folder on the removable backup drive to backupuser, then logging into a virtual terminal as backupuser and running the backup under that account. I use that account rather than root or my regular user account because it has only read and execute permissions on the computer being backed up, so it cannot in any way risk deleting files from the computer. This is very important because I use the --delete flag on rsync to ensure that files removed from the source are also removed from the backup. For incremental backups that flag is important to stop the disk filling up with removed files and running out of space. But it is also a useful safeguard against accidentally copying files from the backup disk over the source files, especially with incrementals, where the backup disk is already full of files.

The simple form of incremental backup that rsync supports is to reuse an existing backup disk that has previously had a full backup written onto it. rsync compares the source against the existing backup directly, with no additional system needed to record file details and work out what has changed; for each source file it compares against the existing copy and then transfers only the changes. This is quite different from other backup solutions I have used, which write incrementals to a separate disk.

To implement this system I will need at least 3 disks for each computer if I want to keep one full backup and two incrementals. In other words each disk will start off as a full backup, then two of them will be alternating incrementals, and a full backup from scratch will be done every few months and kept separately from the incrementals.

I would still have preferred separate incrementals, and in fact it is possible with two removable drive bays: one holds a full backup disk and the other an incremental disk, and the incremental is written to the incremental disk instead of the full backup. The problem with this approach is that rsync cannot produce progressive incrementals, since it is always comparing against the last full backup; the incrementals are not snapshots of changes since the last incremental, but always of changes since the last full backup. It is also messy working with the extra paths to the different disks in the backup command, and therefore more likely to risk mistakes like backing up to the wrong disk.

So in summary the best solution is to use a separate disk for each backup: for 2 of the disks an incrementally updated full backup, and for the third a full-only backup kept on its own.

At the moment I am down to 1 backup disk per computer, so I need to buy some more disks along with removable bay caddies and storage cases. I am also working on secure storage for the disks at a location away from the house. I have only been doing a backup about every 3 months, because the RAID-1 array in the computer is so reliable day to day that I have not felt the need for more, but I will attempt to schedule a monthly backup cycle in future.

Since I started using rsync as a backup solution it has been very reliable, but there are occasional difficulties, and the one I am having at the moment is the backup terminating partway through with rsync error 12 (an error in the protocol data stream). I have done some fault finding but have not been able to narrow down the cause; it is probably something to do with the ssh server on the source computer, which rsync connects through. This means I either have to find out what is happening with ssh on the source, or try the alternative: running the rsync daemon (service) on the source and connecting rsync to that from the backup server.


Firstly, as I am on Debian, rsyncd is already installed as part of the rsync package. The next step is to configure the /etc/rsyncd.conf file, which I will base on a sample provided in an article I found:


uid = 1001
gid = 1001
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log

[backupuser]
path = /home/patrick/
comment = backup patrick
read only = true
timeout = 300
     
 
I have made some changes from their sample, the most notable being the uid and gid, which are set to the values for backupuser, the user that has permission to do the backup on the source computer. Assuming rsyncd runs as root, this ensures files are accessed with the correct (restricted) permissions. I am also specifying the module name as backupuser, with the path pointing at my home folder, so connecting to that module automatically reaches the backup path.

Start the daemon with systemctl start rsync and set it to run at startup with systemctl enable rsync.

So next time I will list the result of testing with the daemon and whether it worked successfully.


Reinstall KDE system with LXQt [3]

This is the third article in a series about problems with KDE that led to a reinstallation with LXQt, both running on top of Debian 10.

The computer concerned runs two NICs because it needs to access both of the networks I have here: the main one, based on a corporate wireless link from next door, which is filtered and throttled, and a second network based on a Skinny cellular broadband modem, which is not filtered or throttled, although speed is limited by the inherent constraints of 4G cellular data. The Skinny modem is connected to the local devices through a Ubiquiti AirRouter, which provides a DHCP server and network switch as well as the wireless network for phones and tablets. The AirRouter's capabilities for internal networks are much better than those built into the Huawei Skinny modem, which is why I am using it.

There are some issues with LXQt, mainly in the panels, but apart from that the reinstallation of software has gone well on the platform. One issue with this computer in general is that it needs more RAM, and I am planning to address that in the next week or two, since I need it to run a virtual machine at the same time as the range of stuff I normally use it for. Another issue was reinstalling Gimp: I had difficulty working out where Gimp keeps its config data, as I wanted to copy the config over from my existing computer. It turns out that Gimp uses ~/.config/GIMP as a path, but apparently it also used a Flatpak-specific path under ~/.var/ to store some data, and there is considerable confusion over why it maintains these two paths, especially as the Preferences dialog doesn't actually point to the ~/.config path when it appears it should.

One of the interesting experiments this week has been to work with the mounting arms that I use to hold some displays. I have used Brateck wall mount arms with VESA adapters on them for this for some years. The key has been to modify these to allow fine control of the height so that when displays are stacked vertically, it is easy to put them close together. 

This photo shows how the attachment of the arm to the "wall" (in this case a vertical post attached to the desk) has been modified so that its height is adjustable. This is done with two right angle brackets bolted together using 80 mm M6 gutter bolts, which theoretically gives an adjustment range of 80 mm.

I also have to work around issues with the backup system on one of my computers at the moment: rsync has not been able to connect over the network between the computers, so running the backup locally on the computer seems to be the way to go for now.

Wednesday 1 July 2020

Setting up sudo on Debian

Debian is a bit different from Ubuntu, and that is why I like it. Debian is much more focused on technically advanced users and, for example, has hibernation enabled by default, in contrast to Ubuntu, which makes hibernation difficult to use; there are many other differences in Debian that I prefer.

One thing about Debian is that sudo is not set up by default for the first user account, and it takes a few steps to set it up. sudo is installed, so all you have to do is add your user account to the configuration. A related quirk is that under Debian 10, /sbin is excluded from a standard user's path in a terminal window, so many commands will not work unless prefixed with the full path, something I have not yet fully understood the rationale for. In a virtual terminal that limitation is removed, and it is hard to know why the restriction applies to a terminal window running under the desktop environment when Debian 9 did not have this issue.
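Until the rationale is clear, the practical workaround is simply to extend the session's PATH, either per-session or permanently via ~/.profile; a minimal sketch:

```shell
# Add the sbin directories to PATH for the current shell session.
export PATH="$PATH:/usr/local/sbin:/usr/sbin:/sbin"

# To make this permanent, the same export line can be appended to ~/.profile
# (it takes effect at the next login).
echo "$PATH" | tr ':' '\n' | grep -x '/sbin'   # confirm /sbin is now present
```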

Anyway the steps to set up sudo are pretty straightforward:
  • Use the su command to enter superuser mode in a terminal shell
  • Install sudo if not already installed: apt install sudo
  • Run the adduser command to add your user account to the sudo group: /sbin/adduser <username> sudo
  • Log out and then log back in again
And that's it.

Reinstall KDE system with LXQt [2]; KDE network and display configuration extremely difficult with numerous bugs

As related yesterday, I found various issues with KDE on a computer and decided to reinstall Debian 10 with LXQt as the user interface. After completing the installation I attempted to use ConnMan, the network configuration manager that comes with LXQt on Debian, to manually configure the 1.x network adapter with a static IP address, without DHCP. However, my experience repeated what I had found on another computer where I used ConnMan in the past: it continued to send DHCP requests for this adapter despite the manual setting, and overrode the blank "gateway" field of the manual settings with the DHCP-supplied values. In this case the setting that was overridden was the DNS server; the system was being configured to use the 1.x network's DNS server address obtained via DHCP instead of the 4.x network's DNS server obtained via DHCP on the 4.x network adapter.

As I have noted, the Lubuntu network configuration manager does not have this problem and is definitely the best of the three. However, as I am not sure at this stage whether I can install it on this computer, my next step was to uninstall ConnMan and set the network parameters in the /etc/network/interfaces file. After uninstalling ConnMan, dhclient could not complete DHCP requests for any interface; it seems dhclient was configured to use settings provided by ConnMan and would have to be reconfigured to work without it.

The interfaces file entries ended up looking similar to this

auto enp0s31f6
allow-hotplug enp0s31f6
iface enp0s31f6 inet static
    address 192.168.4.173/24
    gateway 192.168.4.11

auto enx00500b60c1eb7
allow-hotplug enx00500b60c1eb7
iface enx00500b60c1eb7 inet static
    address 192.168.1.173/24

In addition, /etc/resolv.conf contains the entry
nameserver 192.168.4.11

After completing these entries the system was rebooted, and further checking confirmed the settings had been applied. It was also confirmed that there had been no further lease requests to any DHCP server; the lease for the 1.x network adapter (the problematic one that was pushing through unwanted default gateway and DNS server settings) had expired and not been renewed.

It therefore appears I have resolved the display issues and the network issues so far. The only remaining issue of the 3 mentioned in the original article is to get x11vnc running. Then there is the separate matter of reinstalling the software that was on this computer before the reinstall.

LXQt has some obvious differences from KDE, one of which is the panel configuration. At present it will not support more than one panel on the desktop, and will not support a panel in the top position on the lower display, which in my display configuration is the ideal place for a single panel shared between both screens. A panel can be placed there and used normally, but it cannot be right-clicked and configured, because the menu is truncated and the usual configuration options are missing. The only fix is to move the panel to the bottom by editing the panel.conf file, stop and restart the panel, configure it at the bottom, then move it back to the top when finished. I will have to check with the LXQt project to see if they have any fixes for this problem.

The other key issue, of course, is that fewer widgets are available for LXQt so far. The main one I do not have is disk free space. I can simulate the CPU monitor bars I use on KDE (showing CPU usage, memory usage and swap usage) with some of the built-in widgets. So apart from some niggling issues related to the differences between the desktop environments, I expect to have this system up and running again fairly soon, and to end up with a more reliable and easier to use system than I was experiencing with KDE.