Wednesday 23 October 2019

Red/green/white handheld signalling lamp

Anyone who is associated with rail heritage knows that on the professional railways in New Zealand, train crews used handheld signalling lamps that could produce red, green and white light, for use when shunting a train. The older style lamps of this type used a mechanical rotating filter holder to move red and green filters in front of a torch bulb when a knob on top of the lamp was turned. I've often thought about how easy it would be to make an electronic version from readily available parts.

When I first started to get interested in electronics, LEDs were only available in red and green, and didn't put out much light. Only with advances in LED technology in recent years, in particular the invention of the blue LED to supply the missing primary colour needed to produce white light, and the ability to make LEDs at very high brightness, has it become practical to build your own lamp that puts out a reasonable amount of light and is therefore clearly visible.

To make a handheld signalling lamp, the parts needed are basically some LEDs for each colour, batteries, switches, a case and perhaps a handle. You could use tri-colour LEDs and set the proportions of each colour needed to produce red, green and white by switching in resistors with a multi-pole rotary switch. You could also use separate red, green and white LEDs to produce the colours. My first thought is to go with the second option. Jaycar has suitable LEDs for about $4 each. I would go for the 10 mm size and at least four of each colour, maybe more, or spread them out a bit.
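
As a rough worked example of the current-limiting resistors that would be needed (the figures here are assumptions, not measured values): with three AA cells giving about 4.5 V, and a red LED dropping roughly 2 V at 20 mA, the series resistor works out as R = (4.5 - 2.0) / 0.02 = 125 ohms, so a standard 120 or 150 ohm part per LED would do. White LEDs drop more like 3 V, so their resistors would be smaller, around (4.5 - 3.0) / 0.02 = 75 ohms.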

For switches, the question is finding reasonably waterproof ones, and there may need to be more than one. The most important colour to display is red, which means stop, as this is the most significant safety action. The main problem with many IP rated switches is that they are only SPST. Jaycar does have some momentary pushbuttons that are DPDT, which matters if you want to be sure only one colour can be illuminated at a time. There would be a toggle switch for on/off, and push buttons for green and white, wired (hopefully) so as to cut off power to the other colours when one is pressed.

A case would have to be clear plastic, and there is a limited range of these from Jaycar and other suppliers. The power pack would be a battery holder for some AA cells.

Well that is the theory anyway and at the moment I'll leave this idea there and spend a bit of time looking in more detail into costs and so forth.

Rsync Backup System [4]: Linux File and Directory Security Using ACLs

As per our series on Rsync backups, we want to use a different user from the one that owns the home directory, to ensure it has only the permissions it needs when logging in remotely. At this point, whilst by default the backup user could access many things on the computer, there were some files for which it received permission denied errors.

Whilst there is a simple file and directory permission scheme built into Linux, the ACL extensions are very worthwhile as they give a lot more control, especially the all-important ability to set inheritable permissions on a parent folder. In this case we want to give the backup user read-only permissions on the home directory that are inherited by all subdirectories and files as they are created.

Getting ACLs going is pretty simple. Firstly you need to install the acl package using apt. The second requirement is to check whether the volume is mounted with ACL support. Run tune2fs -l /dev/xxx (part of the e2fsprogs package) and look for "Default mount options". If acl appears there then the volume will be mounted with ACLs enabled by default. If not, edit the volume's mount entry in /etc/fstab and add acl to the mount options. In my case, because I used "defaults" as the mount options in fstab, acl is automatically turned on at mount time. If I needed to specify acl explicitly I could change the word "defaults" to "defaults,acl" in the fstab entry to enable ACLs.
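
As a quick sketch of those checks (the device and mount point here are placeholders, not my actual setup):

apt install acl
tune2fs -l /dev/sda1 | grep "Default mount options"
# if acl is not listed, an fstab entry along these lines enables it:
# UUID=....   /home   ext4   defaults,acl   0   2
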
ZFS also has its own (more extensive) ACL support which is not covered by this post as we are referring to the existing filesystems which are ext4 on the computers being backed up in this case. I am not planning to change the existing RAID arrays from MDADM to ZFS but it could be an interesting consideration for the future should a disk array be replaced in any system.

Now how to configure ACLs for our requirement? Commands used to set and get ACLs are setfacl and getfacl respectively. To set up read access for backupuser on /home/patrick we need to use a command that looks like this:
setfacl -dR -m u:backupuser:rx /home/patrick

If we then run ls -l (or ls -al) in /home/patrick we can see a + sign next to the standard permissions, which tells us that ACLs are set. In this case I can see that whilst all the directories have ACLs set, the files within each directory do not, so I need to run another command to set these:
setfacl -R -m u:backupuser:rx /home/patrick
The two commands break down as follows:

The first one uses:
-d to set the default ACL on each directory (the default ACL is the one applied to any new file or directory created inside it)
-R to operate recursively on all subdirectories
-m to modify (add) the ACL entries
u:backupuser:rx to grant user backupuser read and execute permission.

The second one is practically the same, except that without -d it applies the ACL directly to the files and directories that already exist rather than setting a default.

Whenever a new file or directory is created in future it will inherit the default ACLs defined by the first command, which means backupuser will get the permissions it needs. Multiple paths can be given in a single setfacl command, and several ACL entries can be passed to -m as a comma separated list if needed.
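
To check what has actually been applied, getfacl shows both the access and default entries. The directory name here is just an assumed example:

getfacl /home/patrick/Documents
# among the output you would expect lines like:
# user:backupuser:r-x
# default:user:backupuser:r-x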

ACLs are evaluated alongside the standard permission bits when they come into effect, so you need a good knowledge of how Linux permissions work. In particular your user needs x (execute) as well as r (read) specified in the ACLs on directories in order to traverse them, otherwise you will keep getting permission errors. It took me a bit of work to get the ACLs set up, but once they are done properly the advantage is that permissions are inheritable, so for my backups the backup user automatically gets rights to new files and folders created in future. In fact, for any computer the way to make this happen is to set a default ACL on /home that gives the backup user rx permissions on everything created below it in future.
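
Based on the commands above, doing that for the whole of /home would presumably look something like this (one command for the defaults on directories, one for what already exists):

setfacl -dR -m u:backupuser:rx /home
setfacl -R -m u:backupuser:rx /home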

With the ACLs properly set the test backup was able to access all the files it needed to and complete the backup. Since I knew there were roughly 114,000 items waiting for backup, it was interesting to watch rsync at work discovering new files (a big increase from the roughly 7000 it had access to the first time). 

The means of doing the incrementals is still being worked out, as it is not as straightforward as I had hoped. The log file format is not consistent enough (at least so far) to make it easy to determine which files transferred successfully. The option I am tending towards would probably consist of getting a list of files that have changed since the last backup, then getting rsync to copy them one at a time, looking at the result code it returns, and if there is no error, writing an extended attribute on the source file to record a successful backup. The extended attributes will probably be backup.full = <date> and backup.incr = <date>, written by the appropriate backup script. A full backup simply updates its attribute at backup completion without checking it, whereas for an incremental we compare the last modified date of the file with the stored date (backup.incr if set, otherwise backup.full) and back the file up if it has been modified since the last backup date. So I expect to start more testing reasonably soon, developing scripts to perform these tasks.
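
A very rough sketch of that per-file decision, assuming the attributes end up in the user namespace (user.backup.incr / user.backup.full) and the dates are stored as seconds since the epoch, might look like this:

# f is the path of a source file being considered for the incremental
last=$(getfattr --only-values -n user.backup.incr "$f" 2>/dev/null ||
       getfattr --only-values -n user.backup.full "$f" 2>/dev/null || echo 0)
if [ "$(stat -c %Y "$f")" -gt "$last" ]; then
    echo "$f"    # modified since last backup, so add it to the incremental list
fi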

But as far as the full backups go, it's doing pretty well. I have just set up another full backup and it's working very well; the second time setting it all up was much smoother. It also looks like ZFS disk compression works well and has achieved some useful savings, but I will check exactly what sort of improvement I am getting on these disks.
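
ZFS can report that directly; the pool name here is just the one used earlier in this series and may differ on the backup machine:

zfs get compression,compressratio spcbkup1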

Arlec Plug In Heater Controls [2]

In my previous post in this series I described the PC900 2 hour plug in countdown timer which Bunnings have been selling for some time. The oddity of this product is that it is described on the packaging as having "adjustable switching increments". This is a rather clumsy phrase, more suited to adjustable 24 hour timers, and refers to the fact that the timer can be set to anything between 1 and 120 minutes each time it is used.

In the same post I referred also to the new THP401 plug in thermostat which Arlec's packaging describes as a "temperature controlled programmable timer". The problem with using the wrong terminology to describe a product is that it sends people up the garden path. The description as above is, once again, carried through into the product packaging. Someone at Arlec needs to improve their knowledge of the English language.

I happened to visit Bunnings again today and one of these was on the shelf at $29.95. The product is very clearly not a "timer" but a plug in heater thermostat, and obviously a very useful thing to have. The problem is whether people will understand what it does with the wrong labelling.

Tuesday 22 October 2019

Rsync Backup System [3]: Using Rsync For Full Backup

So last time we talked about how to set up a single disk with ZFS to use compression. Having got our backup disks sorted, the next step is to work out how to use rsync to do the actual backups.

rsync was written by the same people who devised Samba and is a very powerful piece of software. What we need to do first of all is set up a dedicated backup user on the system that is to be backed up, with only read access to the filesystem. This is extremely important in case we make a mistake and accidentally overwrite files on the source filesystem (which is quite possible with such a powerful tool).

Using useradd we can set up backupuser, in this case with a very simple password consisting of four consecutive digits, and by default it will have read-only access to other users' home directories, which means it can read my home directory. backupuser is then used with SSH to access the home directory over the network for creating the backup.
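
For reference, creating the user is along these lines (on Debian, new home directories have typically defaulted to world-readable permissions, which is what gives the read access mentioned above):

useradd -m -s /bin/bash backupuser
passwd backupuser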

The next step is to look at the required form of the rsync command. There are certainly a lot of options for this. Assuming the backup destination zpool has been mounted to /mnt/backup/fullbackup and the source is on 192.168.x.y at /home/patrick then a good starting point for the backup would be along the following lines:

rsync -arXvz --progress --delete backupuser@192.168.x.y:/home/patrick/ /mnt/backup/fullbackup/patrick --log-file=/home/patrick/rsync.log

This looks to cover all the bases needed. The options are: -a, which means copy in archive mode (preserving all source file attributes); -r, which means recurse into source directories; -v, which means verbose; and -z, which means compress during transfers (which can speed things up). --progress means that during each file transfer you see the bytes transferred, speed, percentage and ETA, which is useful for large files. --delete removes extraneous files from destination directories (files that are no longer in the source), which is obviously useful when a file gets deleted or moved in the source. -X means preserve extended file attributes if any exist.

At the moment I am testing this with a full backup of serverpc running in a terminal window. There are some issues with directories not having permission for backupuser; so far this has only affected a few hidden folders, but it will have to be looked into further. Previous backups always used patrick as the user, but it is pretty important to have a special backup user with restricted permissions, which is really best practice for any kind of professional computer setup, because a mistake with the rsync command could wipe out the source directory if there were read-write access to it.

Looking further at the logging options, rsync lets you set the log file format with an extra parameter. This is what I came up with after deciding what information would be useful in each line of the rsync log:

rsync -arXvz --progress --delete backupuser@192.168.x.y:/home/patrick/ /mnt/backup/fullbackup/patrick --log-file=/home/patrick/rsync.log --log-file-format "%a|%f|%M|%l|%b|%o|%U"

With the log file format option in particular, it seems to be usefully logging the information we need for each file. However the %a escape does not actually seem to be filled in by rsync; as far as I can tell it only applies when rsync is running as a daemon.

There is one thing that rsync does not do by itself, and that is incremental backups to a separate disk. rsync is designed to be able to create incremental backups in a separate directory from the one where the full backup is stored, but the full backup directory must be online at the time the incremental is done so that rsync knows which files have changed. This is a problem when, as I intend, you want to be able to use separate disks for the incrementals. Here you see a fundamental issue with the Linux filesystems (at least ext4): they do not have the separate per-file archive flag that NTFS has. The argument goes that it isn't necessary, but that archive bit is how you can tell a file has been modified since the backup last ran, and back it up incrementally.

There are several possible solutions to this and one of them is to create an extended attribute for each source file. This is possible using the setfattr command. So we could create this on each of the source files following the full backup (it would have to be a separate script process following the execution of rsync). Maybe the script would be part of a verify process that we run after the backup is carried out, to verify that the source and destination files both exist, and then write the extended attribute to the source file. The issue is that it may well prove difficult to be sure the source file was backed up unless we can have a look at a log file and prove that it lists the source files and then feed that into a script that produces the extended attribute. Anyway, the point of this is to write an extended attribute to each source file that says when that file was last backed up. This could be a part of an incremental script that uses find to get all files that were last backed up since a certain date. Or we can just use a range of dates for the find command to get the incremental file list using the last modified file time. This will all be looked at in more detail when I start working on incrementals, because for now I am just doing a full backup like I did before with rdiff-backup.
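
As an illustration of the building blocks involved (the attribute name, file path and date here are assumed examples only):

setfattr -n user.backup.full -v "$(date +%F)" /home/patrick/somefile
getfattr -n user.backup.full /home/patrick/somefile
# candidate files for an incremental: anything modified since a given date
find /home/patrick -type f -newermt "2019-10-01"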

Monday 21 October 2019

HOWTO: Set font size in virtual consoles

In Debian you can have virtual terminal consoles, which are different from a terminal emulator window. Before Buster came along, a lot of commands could be run in a terminal emulator, but these days, more often than not, commands have to be run in a virtual console instead. The difference is that a virtual console is fullscreen, not a window, and doesn't run under the GUI, which goes into the background at that point.

You can have six separate consoles that you switch between by pressing Ctrl-Alt-Fx where x is 1 to 6, i.e. the F1 to F6 keys. To switch back to the GUI you press Ctrl-Alt-F7.

The next issue is the font size which I tend to find rather too small. Fortunately this is easy to fix. Use nano to edit /etc/default/console-setup and change the following lines to read as follows:
CODESET="Lat15"
FONTFACE="TerminusBold"
FONTSIZE="16x32"

This gives us quite a large, easy to read font.

Run the command setupcon to apply it immediately; because the change is in the config file it also becomes the permanent setting.

The fonts are all stored in the /usr/share/consolefonts directory, so you can look there to see what is available if you want something different. What I haven't done yet is work out whether I can change the colour from white.

Rsync Backup System [2]: Using ZFS For Compressed Backup Disks

So last time I talked briefly about ideas for using Rsync to do my backups. Over time this will gel into a whole lot of stuff like specific scripts and so on. Right now there will be a few different setup steps to go through. The first stage is to come up with a filesystem for backup disks, which are removable, and this time around I am having a look at ZFS.

Based on some material I found on the web, the first step is to install ZFS on the computer that does the backups. Firstly the contrib repository needs to be enabled in Debian, and after that we need to install the following packages:

apt install dpkg-dev linux-headers-$(uname -r) linux-image-amd64
apt-get install zfs-dkms zfsutils-linux

After completing these steps I had to run modprobe zfs as well to get it running for systemd (this was flagged by an error message from systemd). That only applies to the currently running session; to ensure there are no "ZFS modules are not loaded" errors after rebooting we need to edit /etc/modules and add zfs to the list of modules to be loaded at startup.
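
In other words, something like the following as root (appending to /etc/modules is just a convenience instead of editing it by hand):

modprobe zfs
echo zfs >> /etc/modules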

Since Debian Buster, a lot of commands that could previously be run in a terminal window have to be run in a virtual console instead (switch to one with Ctrl-Alt-F1). This is certainly the case for the zfsutils commands, so the following steps need to be run in such a console.

The next step after installation is to work out how to create a filesystem on a disk and set up compression on it. In this case the existing disk is called spcbkup1 and the ext4 partition currently on it appears as /dev/sdb1, which means the actual disk is /dev/sdb.

After deleting the existing filesystem from /dev/sdb the following command creates the storage pool:
zpool create -m /mnt/backup/spcbkup1 spcbkup1 /dev/sdb

whereby the new storage "pool" (just a single disk which is not really what ZFS is made for but it can be done) is set up to mount to /mnt/backup/spcbkup1, is called spcbkup1 and is using the physical disk /dev/sdb.

Issuing the blkid command at this point tells me I have two new partitions. Apart from /dev/sdb1, which is of type "zfs_member", there is also a small partition called /dev/sdb9 that ZFS reserves for its own purposes. /dev/sdb1 is mounted at this point and can be used as a normal disk.

To turn on compression we then use this command:
zfs set compression=lz4 spcbkup1

There is one more question and that is automounting. When you put an entry into /etc/fstab then the default for mounting is automatic. For a removable disk we don't want this, so the entries in /etc/fstab end up looking like this:
UUID=....     /mnt/backup/spcbkup1 ext4 noauto 0 2
whereby noauto in the options means it will not be mounted automatically; instead you have to mount it manually before use, obviously after inserting the disk, which in my case means hibernating or turning off the computer before inserting or removing the disk.

ZFS is a little different. We use zfs set mountpoint=none spcbkup1 to remove the mount point before removing the disk, and later on, when we want to put the disk back in, zfs set mountpoint=/mnt/backup/spcbkup1 spcbkup1 to restore it. Because we can change mountpoints on the fly, without a config file or needing to know a long UUID, this immediately lends itself to mounting more than one disk at the same mountpoint. In other words, I can dispense with a different mountpoint for each of the disks and mount them all to the same path. So for this backup scheme I will have two mountpoints for spcbkup: one for the backup disk(s) for the full backup, and one for the backup disk(s) for the incremental backup. These will look like:
/mnt/backup/spcbkup-full
/mnt/backup/spcbkup-incr

where the suffixes are hopefully self explanatory, and as long as I swap the incremental disks in and out, the backups proceed automagically.

For this scheme we just have one backup disk for the full backups, and one incremental backup disk at a time shared by all the source computers, so the paths will look like
/mnt/backup/fullbackup
/mnt/backup/incrbackup

and the two incremental disk pools will be called incrbackup-odd and incrbackup-even, referring to the week number. The backup script will work out the week number itself and act accordingly. The plan is that each incremental disk will contain two generations of backups, so with two disks there will be four incremental generations at any one time. In practice we take the week number modulo 4, which gives a number from 0 to 3, and the backup for that week goes into a folder on the disk called mainpc-x, serverpc-x or mediapc-x, where x is that number. rsync will be set to delete any redundant files from the target at each sync, and the week number is also used to calculate the date range for the command, so that it can use file modification times to identify the files that have changed within that range. The script automatically runs on the same day and at the same time each week for the incrementals. About every three months a full backup is done, on a different day from the scripted day so the script is not disturbed; with three computers this amounts to one full backup each month.
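
A rough sketch of the week arithmetic the script would use (the names are assumptions at this stage):

week=$(date +%V)                       # ISO week number
slot=$(( 10#$week % 4 ))               # 0-3: which generation folder on the disk
disk=$([ $(( 10#$week % 2 )) -eq 0 ] && echo even || echo odd)
echo "expecting disk incrbackup-$disk, backing up into serverpc-$slot"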

It is important to unmount using the above commands before removing the disk, otherwise the system will behave badly at the next startup. (Although the Raidon disk caddies are theoretically hot-swappable, in practice I am using them as non-hotswap for various reasons, so the system is shut down or hibernated before changing disks.)

So having worked out how to set up disks, the next step is to work out how to use rsync and I will be running a full backup of serverpc first of all.

Rsync Backup System [1]: Introduction

In February of last year I was looking for a new backup solution (I had used rsync to that point but found some issues with it) and tried a few different things. I have been using rdiff-backup since then (it comes as part of Debian) but this has its issues too.

At the moment I can't recall exactly what the issues were with rsync, and it's looking more attractive as an option again. I think the reason I couldn't make it work satisfactorily before was that I used it over a network (Samba) share, which it doesn't work well over. But Linux has SSH, which rsync supports natively, as a much better option for network syncing, and that should work a lot better than a Samba network share.

The reason I want to ditch rdiff-backup is that I want a system that will do incremental backups by copying whole files, and onto a different disk volume. rdiff-backup works on the basis that you back up both full and incremental backups to the same volume, and an incremental contains only the actual changed parts of files. Unfortunately this makes the incremental dependent on the full backup: if you lose the full version of the file then the incremental is of no use, since you only have the fragments that changed, not the whole files.

The reason I chose rdiff-backup in the first place (over some options like borg, which I could not make work easily) was that it would copy a file structure (directories and files) instead of storing the files and folders inside an archive file. If for some reason the archive became corrupted it could be very difficult to recover the files inside it. There is an advantage that the archives are compressed but another option is to set up your backup disk to compress at disk level.  I tried doing this by formatting my backup disks as BTRFS but then the support for this changed in Buster and I am unsure if there are still issues with it. I will have another look at this especially if there is any option with compression support in ext4.

The idea at the moment is I would use rsync over SSH to back up each computer using a dedicated backup computer with a full backup about every three months and incremental backups about weekly in between. I have 2 TB disks for the full backups and 1 TB disks for the incrementals. Currently a WD Blue 2 TB disk suitable for a backup costs $105, while a 1 TB is $78. I'd need to get some more of the Raidon drive caddies for extra disks and the leather bags that they are stored in.

Rsync, or anything that copies files and folders as-is, is a better and more transparent option than something that uses its own structures and archives, which leaves you dependent on that software continuing to be supported and developed. Rsync is very well optimised for syncing and the idea is that it can do incrementals on top of existing full backups. rdiff-backup does the same sort of thing, except that unlike rsync it leaves the original file unchanged, whereas rsync will overwrite it (when resyncing to the same backup destination). There are a few possible schemes for the backups. One possibility is to put the backup computer in a remote location (not really possible in a house) and have it sync unattended for a month at a time, then change the disk for another one and let it do another month, copying all three computers' data to the same disk. Another option is two disks for the incrementals, exchanged weekly. This is the more likely scenario. Probably each disk will store six incrementals of two weeks each (the full backups happen every three months).

Anyway, there is a bit of detail to work out, and probably some more disks and trays etc to purchase, to get this scenario all set up; hopefully it will come together soon.

Tuesday 15 October 2019

How to create a desktop menu entry manually in Linux

This is kind of an odd thing to be posting about nowadays, with Linux desktops mostly including a menu editor, or installable third party packages like menulibre and alacarte able to achieve the same outcome. However, LXQt does not include a comparable menu editor, and whilst both of the above packages can be installed in Lubuntu or Debian/LXQt, the results of running them are confusing. It is also possible with the PCManFM-Qt file manager to edit application shortcuts directly in a specific area of the standard user interface, but that is awkward and difficult to understand. Fortunately the majority of desktop environments support a standard method of creating shortcuts as .desktop files stored in a standard location.

The .desktop files are created in either of the following locations: /usr/share/applications or ~/.local/share/applications. The difference is that the former is system wide and the latter is per user. For this example we have downloaded Firefox Developer and extracted it from the download file. This does not give us a menu shortcut, so we have to create one. Typically we will copy the FFDE files to the ~/Applications folder and create a local shortcut in ~/.local/share/applications, which has the double advantage at reinstallation time that both the application and its shortcut are located in the user profile and therefore do not need to be reinstalled when the operating system is.

The typical format of the .desktop file looks like the following (in this case the actual parameters used to start Firefox Developer from where we installed it):

[Desktop Entry]
Categories=Internet
Comment=Firefox Aurora with Developer tools
Exec=/home/patrick/Applications/firefox-dev/firefox %u
Icon=/home/patrick/Applications/firefox-dev/browser/chrome/icons/default/default128.png
Name=Firefox Developer Edition
NoDisplay=false
StartupNotify=true
Terminal=0
TerminalOptions=
Type=Application
Version=1.0
 
Some of the above might not be needed.

The key issue, however, is that LXQt (at least on Lubuntu, but probably also on Debian) does not seem to handle user specific entries properly in its menu. When one was created as above, it ended up in the Other submenu rather than the Internet submenu. Likewise one could be created with menulibre, which puts it in the same location by default (~/.local/share/applications), but it ran into the same issue of LXQt pushing it into the Other section of the menu. I am still experimenting to see how this can be overcome, but for now, since both methods produce the same outcome, using menulibre as a GUI is the obvious preference.

Friday 4 October 2019

Arlec Plug In Heater Controls [1]

Arlec is an old established Australian brand of electrical products (such as extension cords, plugboxes, timers etc) that has been available in NZ for many years, at least 40 to my recollection, and their innovative designs and features continue to the present day.

With plug in electric heaters, certain desirable features that would be easy to fit to a wired in heater, such as a time delay switch-off or a thermostat, are not always easy to find. I can recall various brands having come and gone; countdown timers from BenQ and HPM come to mind, and likewise various brands of plug in thermostat. Arlec is now making both types of product and selling them through Bunnings. (I also remember Kambrook plug in timers with some affection from my youth, but they have diversified more into household appliances nowadays.) I still have a Honeywell plug in timer fitted with a cord and interrupted phase tap-on plug, which can be mounted on a wall some distance from the heater, and it is still going strong more than 30 years later.

The Arlec PC900 plug in countdown timer is a mechanical timer giving 2 hours delay and simply plugs into a 3 pin outlet and the heater or other device plugs into it. 

The timer appears to do what is expected of it. However due to its design, it will block an adjacent outlet in a plugbox or multi outlet wallplate. This appears to be a design factor with Arlec products in this range (several different types of timer including the PC697 digital 7 day timer).

Newly available from Bunnings is the THP401 which is clumsily described as a "temperature controlled programmable timer". 

So far as I can ascertain, this should really be described as a digital plug in thermostat, but because Arlec do not appear to have added this model to their website yet, and there was not one on the shelf at the Bunnings store I visited today, I was unable to verify the description of the unit.

Wednesday 2 October 2019

Using a Windows 10 computer as a media player

UPDATE 19-10-7: Due to cost, the Win10PC was put into an old obsolete chassis and parked in a corner of the room so it can be used when needed, but it will not sit on my desk and will hardly ever be used. This was accomplished using an identical spare GA-E350 WIN8 mainboard and the boot disk removed from the Mini-ITX chassis, and it works fine. The Mini-ITX chassis was then set up with Lubuntu using the 500 GB HDD that had been in it as a data disk for Win10PC. x11vnc has been installed to enable remote control for maintenance purposes, and with Kodi it works a lot better than when it was running Windows 10.

UPDATE 19-10-6: Whilst this computer will do for now, I am looking at replacing it when I have the resources with a new Raspberry Pi (the new RPi 4B has a significant performance improvement over the 3B that I already have) or a NUC running Linux, simply because it is getting a bit long in the tooth and has performance issues, though in reality most of these issues come from Windows 10's excessive resource demands. The other option is to put the Windows 10 installation into another chassis, freeing this chassis for the spare board of the same type to run Linux, so I will consider the options for that.

UPDATE 19-10-3: After the heap finally managed to update itself to Windows 10 release 1903 (a whole drama and story in itself, which took months of failing to install the update, me deciding I just could not be bothered troubleshooting, and then finally it installing itself successfully last night), Kodi started having playback problems with videos only: every few seconds the audio would drop out momentarily. The solution that worked for me was to go into Settings, Player Settings, Videos, change the rendering method to DXVA and turn off DXVA hardware acceleration.

Back in December 2014 I got an Antec Mini-ITX chassis to use with a Gigabyte GA-E350 WIN8 board that I had purchased. I actually had two of these boards (the other one is stored as a spare). Given it's almost five years ago, I'll elaborate: I originally got these boards because they were very cheap, they were designed for low power compact computers needing a power supply of less than 50 watts, and they have an integrated AMD E350 CPU. These photos were taken at the time I built the system; they are missing from the article that is on the blog and I need to put them back in sometime.

The bare chassis, looking through the cooling grille on one side towards the I/O plate on the back.

The partly assembled system. The board is installed and screwed in place and the internal cables (mainly front panel USB / switch / LED) are loose, waiting to be plugged into the board. You certainly need the low profile cooler on this board because a full size one would have trouble fitting inside.

The system assembled and running, with the display showing a setup screen. The extra cables that weren't in the previous picture are for the hard drives.

On the back of the chassis under a blank panel is the mounting plate for the 2.5" hard disks (there is room for two of them). You can see one installed here, with the power cable hanging loose in the second mounting position. Behind the HDD is the bottom of the motherboard. The mounting plate is detachable because the disks sit on the inside of it, and insulating pieces are supplied to prevent the HDDs from shorting out on the metal plate.

Looking inside the complete system while installing Windows. The cables have been tied back in place ready for the lid to be put on. The power input cable, which carries 19 VDC from the external power supply that comes with the system, can be seen upper right. There is a small power supply board inside the chassis that generates the usual voltages (+/-12V, +/-5V, +3.3V etc) at up to 60 watts, with the usual ATX power connector.

Anyway, if you've followed this blog you'll know that this system ended up as my one and only Windows computer, now running Windows 10 Home, and until recently it was hardly ever used. But now I want to get some use out of it, so I've been setting it up as a media player computer for bedside use, which mostly means playing music, or playing videos with the screen turned off. The main issue with an older system like this is that the CPU doesn't have the decoding support built in for a lot of modern video, which means it can't handle higher resolution or high bit rate videos well; video playback on most operating systems these days depends on the CPU doing hardware decoding, which is a key reason I upgraded a Sandy Bridge system last year after it had trouble playing back WEBM videos. For the use this system is being put to, that isn't a problem, and Youtube videos that stutter can be switched to a lower bit rate in the browser.

So what software do we want to use, compared with what we can use in Linux? Kodi is available on Windows and it worked fine this time; I had trouble with it on a previous Windows installation where there was no sound. Windows Media Player has been out of Windows for so long that I have almost forgotten what it was like, and the alternative app built into Windows 10 is nowhere near as good as Kodi. The other main application needed is a sync tool to sync the video and music libraries from another computer. Since I use NFS for my day to day networking in Linux, and Microsoft doesn't make its NFS client available in the Home editions, I had to set up a Samba share on the mediapc and network using the built in SMB/CIFS support in Windows. Karen's Replicator is the ideal program for syncing; I used to use it to back up my Windows computers back in the day and it does everything needed to get a full mirror that also replicates deletions from the source.
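
For reference, the share on the mediapc end only needs a minimal stanza in /etc/samba/smb.conf; the share name and path here are assumptions, and the Linux user still has to be enabled for Samba with smbpasswd:

[media]
   path = /home/patrick/Videos
   read only = yes
   guest ok = no
   valid users = patrick

smbpasswd -a patrick
systemctl restart smbd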