Tuesday 1 November 2011

How to write a ranking query in MS Access

One of my little jobs here at work is to produce a report for two year groups of students that ranks their results in the three subjects they take. The easy way to do this in MS Excel is the RANK function: point it at the whole set of results and it works out the ranks and puts one alongside each student’s name. Going from Excel to Access means writing our own ranking logic, as there is no built-in equivalent. Ranking is an aggregate operation, meaning it has to work on a whole set of results (all the rows of a query) rather than one row at a time as a normal custom column would, and the query it is used in is called an aggregate query (also known as a Totals query). In other words you need to aggregate together all the data in the column you want to rank (all of the students’ results) and work out a ranking for each row based on that aggregation. Aggregate queries typically use functions like Sum, Count and so on.

Another option is Access’s domain functions, or rather a custom function written in that style. In such a function you use DAO code to run a subquery on the database: you aggregate all of the data inside that procedure or function and then compare it with the value you were passed as the function argument.

In this case I was able to work out how to do the ranking in MS Access directly in SQL. You can build it in the query design grid if you know how, but it is probably easier to type it into the SQL view directly and let Access turn it into the grid format. The first thing you need is a query or table that contains the data you want to rank, along with the primary key for each row. The next thing is to create the ranking query itself, which you do by joining your source dataset onto itself and using the Count function. I used as my source this useful page here: http://www.1keydata.com/sql/sql-rank.html
Based on their example my query looks like this:
SELECT OG1.Row, OG1.Data, Count(OG2.Data) AS Rank
FROM [PA Year 10 Maths Overall Grades] AS OG1, [PA Year 10 Maths Overall Grades] AS OG2
WHERE (((OG1.Data)<=[OG2].[Data])) OR (((OG1.Data)=[OG2].[Data]) AND ((OG1.Row)=[OG2].[Row]))
GROUP BY OG1.Row, OG1.Data;
OG1 and OG2 are aliases for the same table. Since we are ripping data out of Musac, this data is in the Cells0 table which stores numerical columns. Row is the primary key (student number).

As it is written above, the main issue is that two equal results get the higher rank value rather than the lower one, e.g. 1,3,3,4 instead of 1,2,2,4 when the 2nd and 3rd ranked grades are equal. Most of the time people would want the second type of output. This turned out to be quite a simple fix: replace the <= in the first part of the WHERE clause with <.
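For reference, here is the same query with that single character changed and nothing else touched:

SELECT OG1.Row, OG1.Data, Count(OG2.Data) AS Rank
FROM [PA Year 10 Maths Overall Grades] AS OG1, [PA Year 10 Maths Overall Grades] AS OG2
WHERE (((OG1.Data)<[OG2].[Data])) OR (((OG1.Data)=[OG2].[Data]) AND ((OG1.Row)=[OG2].[Row]))
GROUP BY OG1.Row, OG1.Data;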

The next steps we need to look at are generating the word “equal” if two students have the same grade, and generating a suffix for the rank itself, so that we can output a line that looks like “Congratulations on a 1st equal place” or “Congratulations on a 3rd place”. The suffix is easy: pass the rank into a simple function on each row. To get the “equal”, I used to use the COUNTIF function in Excel; the way to do the equivalent in Access is similar to the ranking query above:
  • Create a new query
  • Add the above query twice
  • Join the two source query instances on the Rank field
  • Select the Row, Data and Rank columns from the first query instance
  • Change the query to an aggregate (Totals) query and, as the fourth column, apply the Count function to the Rank field of the second query instance. This column gives the number of students sharing each rank.
All we then have to do is pass this count into a custom function (or an IIf expression) that outputs “equal” when it is greater than 1. In practice it is easiest to wrap the above query in yet another query for this, as it is awkward to feed a result column of an aggregate query straight into another column’s formula within the same query.
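As a sketch, assuming the ranking query above has been saved as [PA Year 10 Maths Ranked] (a made-up name), the tie-counting query comes out something like this, and the wrapper query can then use an expression such as IIf([RankCount]>1,"equal","") for the word itself:

SELECT R1.Row, R1.Data, R1.Rank, Count(R2.Rank) AS RankCount
FROM [PA Year 10 Maths Ranked] AS R1 INNER JOIN [PA Year 10 Maths Ranked] AS R2 ON R1.Rank = R2.Rank
GROUP BY R1.Row, R1.Data, R1.Rank;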

Friday 28 October 2011

The Cult of Apple (or insert your favourite IT company name)

Some people find these subjects extremely divisive. Nevertheless it is true. The Apple Messiah just died. The company will never be the same without him, just as it wasn’t when he was pushed out the first time in the 1980s. This isn’t just an Apple thing. There are lots of companies that fit that model in IT, and many of them have driven the biggest advances in computing technology. The driving personality cults of their founders or key people seem to be what gives them the ability to be at the forefront of their field. But that ability often only lasts as long as the leader is running the show, precisely because of the personality cult component.

I wrote this one because in Christian circles, we find cults teach people the wrong ideas about God. Is there a parallel with secular cults? Are the values imparted by the leadership of secular entities like Apple the best ones for our society? Apple is based on a proprietary hardware model, protected by highly controversial IP patents and aggressive legal action against anyone who tries to compete. I think the whole patent idea, at least in the way it is being used with software today, is highly flawed. These are all elements of control, and it’s undoubtedly true that control is a highly important component of a cult. It’s easy to argue there has been a Cult of Microsoft, which has been similar to Apple. Even though they don’t own their hardware platform, they effectively got control of it through licensing practices that commercially disadvantaged competitors. The Cult of Open Source is more of a communal type of cult than a corporate one.

I think that once these cults have passed their peak, which most of them do once their founding leader is out of the picture, they will fade away, and new cults will rise up and replace them. In years to come, we will wonder what all the fuss was about Apple Computer, or Microsoft, or open source.

Wednesday 19 October 2011

Built in obsolescence

When I specced out the upgrade of my home computer I decided on the Intel DG41RQ mainboard, which is nearly the same as the DG41TX mainboard in the work computer I got at the start of this year. The DG41RQ takes DDR2-800 RAM in two slots with a stated maximum of 8 GB. However, DDR2-800 is economically available only up to 2 GB per DIMM, unless you want to pay roughly four times as much for a 4 GB DIMM. I don’t know if this situation will change for DDR2 memory. If it doesn’t, then this board is effectively limited to 4 GB, of which only about 3 GB is usable in 32-bit editions of Windows. I could install Windows x64 on my home computer and gain roughly 800 MB of extra usable memory, but most apps I use at home are 32-bit and, because of the WOW64 compatibility layer for x86 code, they actually run slightly slower on a 64-bit OS. As well, the 64-bit edition uses more memory when loading 32-bit applications than the 32-bit edition does. The result is that I just can’t see the point of going up to x64 Windows at the moment. On the face of it the DG41TX is more up to date because it takes DDR3 RAM, which comes in sizes up to 4 GB at present. However that board is limited to 4 GB maximum anyway.

The bigger concern is that the DG41RQ board has this effective limit of only 4 GB of RAM. When I got my previous system 5-6 years ago it came with 512 MB, which over the course of its life got upgraded to 1.5 GB. That was partly easier because it had four slots, but also because the original DDR came in modules of at least 1 GB per DIMM. So moving to DDR2 and finding an effective limit of 2 GB per DIMM is a pretty poor advance, considering boards with DDR2 were still being put into new computers only a year ago. It means this board can’t really have any more memory put into it. Ideally I would like to go up to 8 GB and put on x64 Windows, but I can’t do that without replacing the board and CPU. When I specced the board, I knew that as LGA775 it was pretty much the last of the line; the newer generation LGA1156 was available, but more expensive. This is just something you have to be careful about. What it means is that I will probably have to upgrade this computer in three years’ time instead of five.

These are the annoying tricks that manufacturers like Intel pull on consumers all the time. The technology becomes obsolete faster and faster these days.

Saturday 15 October 2011

Windows 7 Folder Redirection

One of the more useful features for server admins on Windows is folder redirection. Basically this is a feature in which the user’s key folders such as Documents, Music and Videos can be redirected out of their user profile and into a fixed server based location. Taking the data out of the user profile speeds up login/logout times significantly.

The Application Data or AppData folder is one that can also be redirected, and in previous versions of Windows this was a useful feature to have. However, in Windows 7, Microsoft made a bizarre and stupid decision to take the user’s start menu and other shortcuts out of their dedicated folders in the main part of the user profile (C:\Users\username\Start Menu would have been the equivalent of the XP folder path for the start menu) and put them into the AppData folder’s Roaming subdirectory. The result is that what was previously a functional capability of having application data (which for the most part is invisible to the user) redirected has turned into having the start menu (which is quite important and visible) redirected as well.

Whilst I haven’t investigated what this means, the fact that on my computer I frequently have problems with Taskbar icons that disappear (or rather the message is displayed “Can’t display this item, it must have been removed”) is most likely, in my view, related to the Start Menu redirection stupidity implemented by this design change in Windows 7. Redirecting plain old AppData is useful because a lot of this data does build up to a significant size over time and therefore is best taken out of the user profile. Redirecting the start menu, which doesn’t take up a lot of room, and is going to be affected by network disruptions, or perhaps a server going offline, isn’t a good idea. It’s another one of the changes in Windows 7 that Microsoft has foisted onto users without fully considering the implications.

In order to get rid of the impact of this decision (the disappearing taskbar shortcuts) I am going to have to stop redirecting AppData, and therefore put up with a larger profile and slower login/logout times as a result.

Thursday 1 September 2011

Are major peripherals becoming disposable?

Here’s a conundrum. I can buy a new printer from the likes of Brother for around $99. The official Brother drum for this printer costs $150. Maybe instead of buying that new drum I should just throw the printer away each time the drum wears out. The same goes for projectors, which now typically have a very long bulb life of up to 5000 hours. The issue there isn’t the cost of a new bulb, but rather that after 5000 hours of use the projector could be considered worn out, or near to it, or simply not worth the expense of the bulb.

When we get into more expensive printers, the consumables are a smaller proportion of the cost of the printer, so it is more worthwhile to replace items like a drum. That is especially so where you can buy third party consumables, as you can for most Brothers, since they are one of the few printer manufacturers who have not chipped and patented their consumables. This is all driven by cutthroat competition between printer manufacturers, to the extent that new printer prices are so low the manufacturers are selling at a loss. But we are the losers, because we go out and buy a whole new printer instead of a new consumable and end up throwing away a large bulky printer, which is bad for the environment.

At our school we buy Brother, mainly because their consumables are much cheaper, and so are the printers, provided you can get good performance out of them. Lately I’ve had a lot of trouble with Brother’s drivers. I solved that by using HP drivers via PCL5 compatibility, which works only as long as PCL5 adequately covers the printer’s features, and it does mean the printer is misidentified as to its capabilities. So it is a bit of a mixed bag all round.

Wednesday 3 August 2011

Windows 7 Printer Installation Problems SNAFU

We have had a huge saga with printer installation when trying to automate and control it through logon scripts or the various Group Policy settings. A few years back, in the days of Windows Server 2003 R2, Microsoft added the Print Management MMC tool along with the ability to manage printers using Group Policy. Later on, after buying out a company, Microsoft added Group Policy Preferences as another means of managing printers through Group Policy. Each of these has its own problems. The most serious problem you will encounter with Windows 7 is the dreaded hang when processing Group Policy Printers policy. We have had numerous experiences where this simply freezes. I can leave a computer and come back the next day and it will still be stuck on this screen. This is a very debilitating experience for the user, who cannot log on to their computer. All you get is entries in the event log talking about a long time spent processing Group Policy, and when you try to find help to solve the problem there is none to be found.

Microsoft has made a complete dog’s breakfast of centralised print management for domain administrators, although the early Print Management MMC approach that came in Windows Server 2003 R2 was a promising start. The limitations of that approach, in particular removing printers, which does not actually work properly, have been compounded by the further problems with Group Policy Preferences, which lead to the logon freeze mentioned above. In short, the possibility of effectively centrally managing printer installation in Windows 7 has been kneecapped by what is becoming a typical Microsoft cost-cutting approach to resourcing these capabilities. Therefore the only real option is to return to login scripting to install or remove printers, although this can also be managed through Group Policy settings.

I have started out by using a Powershell script, as Windows 7 / Server 2008 R2 has built-in support for these in GPOs. Here I ran into the dreaded black screen after logon where the computer appears to freeze. Using some of the more reliable GPP settings, like pushing out the registry value for DelayedDesktopSwitchTimeout and setting login scripts to run synchronously, seems to be doing the trick after a lot of mucking around. However you should also check the event log on target computers under Microsoft-Windows-PrintService, and if there are a lot of errors referring to malformed registry keys for specific printers then go into HKEY_CURRENT_USER\Printers\Connections and remove the subkeys for those printers. In my test case deleting all these keys stopped the errors from appearing in the Event Log and allowed the printers being mapped in the login script to appear in the Printers folder.
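For what it’s worth, a minimal sketch of the sort of logon script I mean is below. The print server and queue names are placeholders rather than our real ones, and clearing the whole Connections key is simply what worked in my test case (the connections get recreated by the mappings that follow):

# Clear out the per-user printer connection keys that were generating the
# malformed registry key errors; they are recreated by the mappings below
Get-ChildItem "HKCU:\Printers\Connections" -ErrorAction SilentlyContinue |
    Remove-Item -Recurse -Force -ErrorAction SilentlyContinue

# Map the network printers for this user and set a default
$net = New-Object -ComObject WScript.Network
$net.AddWindowsPrinterConnection("\\printserver\Room10-Laser")
$net.AddWindowsPrinterConnection("\\printserver\Office-Colour")
$net.SetDefaultPrinter("\\printserver\Room10-Laser")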

The next step will be to have more printers being added in the login script. Just why this appears to be successful when GPP Printers does not work isn’t particularly clear. For us as system administrators a series of experiences where Windows 7 technologies appear to be lacking proper resourcing or support from Microsoft demands we respond with a “back to basics” approach and revert to tried-and-true, in this case logon scripting (which I thought had gone out with the ark). I don’t consider myself to be a programmer but that script will be getting updated and added to in future in order to make sure that printer installation and configuration can be relied upon. This is very important because incorrect printer installation will cause a lot of strange errors and problems with lots of everyday applications. I have seen these numerous event log errors before and deleting the keys from the Registry is not a new experience. I wrote about it here in fact. And looking back at that situation it is by remarkable coincidence (NOT!!!!) that the latest problems involve another Brother printer…

UPDATE: The saga continues because even though we have switched the server and clients over to driver isolation, the print spooler is still failing. The types of events now being logged are an inability to load the Brother print processor, even though it is not actually being used. In order to deal with this some registry keys that cause print processors to be loaded will need to be removed. I hope this fixes the problem as we have already paid for this printer and don’t want to have to return it. Here is the info about the registry keys:
The summary of this is to look for and remove registry keys under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Print\Environments\Windows NT x86\Print Processors. There will be subkeys named for each print processor in use in the system. The content is part of a much larger article on how to debug print spooler crashes. I hope this will be successful as the only other option is to get rid of the particular printer.
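For anyone chasing the same thing, the check is roughly as follows. The BrPrint name is just an example of a vendor entry, on x64 the corresponding keys live under the Windows x64 environment instead, and exporting a backup first is sensible:

set PP=HKLM\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT x86\Print Processors
reg query "%PP%"
rem If a suspect vendor entry (e.g. BrPrint) is listed, back it up, remove it and restart the spooler:
rem   reg export "%PP%\BrPrint" brprint-backup.reg
rem   reg delete "%PP%\BrPrint" /f
rem   net stop spooler && net start spooler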
UPDATE 2: The above didn’t work in this case, as the registry keys didn’t exist. Although we are using x64, a search of the registry for the BrPrint processor failed to find its key, and there are no recent events for this print processor’s DLL failing to load. I can only assume with applying the isolation settings, the Brother print processors are not actually being loaded.

The solution now being tested is to do away with Brother drivers altogether. Most printers are compatible with HP drivers, and this printer turns out to work satisfactorily with the HP Universal Print Driver for PCL5. So far all testing is good. If it all turns out then all the Brother 64 bit print queues on our print server will be changed over to HP drivers.

Friday 15 July 2011

V2P [4], Thin PC [4]

A couple of weeks ago I wrote about how our laptops which were set up with native boot VHD would all have to be V2Pd, because VHDs on the native boot system tend to get corrupted more often. This week it is time to put that into practice by getting started on the 14 laptops, a job which will continue over the next couple of weeks. The first step is to back up the VHD. Then we mount it to a drive letter using Diskpart commands and use ImageX to capture it in place to a WIM. Once that is complete we apply the image to the same partition the VHD was on. We then remove the existing BCD store from the system partition (easiest just to format it) and use bcdboot to create a new boot configuration there. Then we reboot and the laptop should be exactly as it was.
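As a rough outline only (drive letters, file names and the image name below are illustrative, not our actual ones), the sequence from a deployment command prompt or Windows PE looks something like this:

rem attach.txt contains two lines:  select vdisk file="D:\boot.vhd"  and  attach vdisk
diskpart /s attach.txt

rem capture the mounted VHD volume (V: here) in place to a WIM on a backup location
imagex /capture V:\ \\server\backup\laptop.wim "Laptop V2P" /verify

rem detach the VHD again (detach.txt: select vdisk file="D:\boot.vhd" and detach vdisk) and delete it
diskpart /s detach.txt
del D:\boot.vhd

rem lay the image down on the partition the VHD occupied (D: here), then, after clearing the old
rem BCD store off the system partition (S: here), rebuild the boot configuration
imagex /apply \\server\backup\laptop.wim 1 D:\
bcdboot D:\Windows /s S: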

The more experienced of you will probably say I left a step out, and I did. That’s because it shouldn’t have been necessary to sysprep the VHD, since we were just moving the image from one partition to another. And because MS have limited sysprep to a maximum number of times it can be run, I try to use it as little as possible. But regrettably sysprep has proven necessary in this instance, because Windows 7 will detect the subtle changes in the hardware environment, lock down the computer, declare itself not genuine, and leave you with absolutely nothing you can do to make it work normally. With all of the negative comment I have written about MS in the past couple of weeks, I have to say that this adds further to that viewpoint, as does the rest of this post. I did get that computer working, but it was a real mess to have to go through it all again after sysprepping it, and that should never have been necessary. It is another example of the MS mentality.

The second thing I have been working on lately is Thin PC, the latest effort being to see if it can run a very old software package we have called Successmaker 5.5. This package has been around our school for pretty much the last 6 years and we started with it on Windows 98. The installer that comes with it had some trouble on Windows 7, and it wasn’t entirely due to lack of elevation; some of the things it was doing just wouldn’t work at all, for whatever reason. So I tried another tack. I built up a Windows XP machine and ran Ghost AutoInstall (AI) on it to capture the machine state, then installed SM, ran AI Snapshot again and built an AI installation package. Then I went over to the Thin PC machine and ran this installation, customised some config files, tested it and it worked properly. So it looks like we can load up some machines with this version of 7 and have them running this old legacy package.
I then decided to capture the installation with sysprep and this is where I ran into problems. On reboot there was an error during installation, some problem with the product key, and Setup threw a fatal error and told me to reboot. Of course that did not fix the problem at all. This became another unrecoverable setup error like others I have seen before, where the only fix is to remove the image and completely replace it. As you would understand, this means I have to build the image again from scratch (my pre-sysprep image turned out to be corrupted). It’s becoming abundantly clear that Windows 7 Setup can’t actually recover from many errors, and even if MS fixes problems like this they have bought themselves another bad rep with OEMs and organisations which use imaging.

UPDATE: The one good thing about reimaging the laptop with a sysprep was that all I had to do after setup finished was join it to the domain. It picked up the existing user profiles and settings that were already on the laptop without problems. So to speed up this process (I had to repeat nearly all the steps from the beginning to get the sysprep in), it looks like we will just back up the VHD, then sysprep, then image, and so on. But sysprep would not have been necessary if some idiot at MS had not decided to make an image totally unusable instead of giving the user the chance to activate it again. And the same mentality applies when a setup failure forces you to completely throw away your image and start again.

Thursday 14 July 2011

More problems with Windows 2008-R2-Vista-7 security elevation

Last week I wrote a rant about the changes MS has made due to its security elevation model implemented in 2008/R2/Vista/7. The post covered how the domain administrators group has effectively been turned into a leper colony, through what amount to built-in and irrevocable Deny permissions for the administrators group on a computer or server.

Today I’ve just discovered another problem – along with the explanation that it is “by design”. This is the issue that when you elevate a process to administrative rights, it loses access to mapped network drives that it would otherwise have access to on your computer. I already knew this happened at command prompt level, but wasn’t prepared for seeing it occur when I tried to elevate a setup process that also happened to be running on a mapped drive. Although the setup process was able to elevate, it couldn’t find the drives including the one it was being run from (LOL). There is a means of working around this problem as described here.
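I can’t vouch that it is the same workaround as the one linked, but the fix most commonly suggested for this is the EnableLinkedConnections registry value, which makes elevated processes see the same mapped drives as the rest of your session:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f
rem then reboot (or at least log off and on again) for it to take effect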

Whilst I don’t regard rants as an effective means of communication, it has served to make the point that these changes MS have implemented in Windows (along with many others in their service model) have, in my opinion, significantly undermined their claim to have produced a credible, professional grade product suitable for use by large enterprises in all the situations that would reasonably be encountered in large or multiple sites. MS’s response to the increasing competition they face at various levels has not been to produce a better product, but to slash their costs and service levels, and find new ways of stamping out the competition.

Thursday 7 July 2011

!@#$%^ Windows stupid ownership / permissions changes in Vista/Server 2008

[Screenshot: the folder access prompt discussed below]
This is a response to the above message, which zillions of system administrators worldwide hate seeing on their server console. These messages were introduced as a new “feature” of Windows Server 2008, along with the changes that cause them. Microsoft arbitrarily gave ownership a different meaning in Vista/2008 from what it had in XP/2003. In Vista/2008 the ownership of a file or folder takes precedence over permissions assigned to parent folders. For example, in a home folders share where individual users have created their own home folders, or have had them created by an automated process, they are automatically the owners of those folders. Even if the administrator has full control over the parent folder, this ownership blocks the normal inheritance of permissions. While there may be situations where an administrator should not have access to users’ home folders, that can already be catered for within the existing mechanisms for setting permissions on a parent folder and assigning them to different administrators, rather than imposing a one-size-fits-all solution based on a Big Brother idea of dictating to organisations how to run their own file servers.

Now, a solution to this is to change the ownership of all the files and folders in a location. Make the administrators group the owner and that will fix all these problems? Actually, it won’t. The second change which came about in Vista/2008 is that the administrators group in general no longer has the same authority over the server as it used to. Everyone has seen innumerable messages telling you that unless you run something as administrator, the fact you are a member of the administrators group does not actually give you the rights you would normally expect. The implication of this for ownership is that changing the ownership to a group does not work: changing the owner to the “administrators” group does not overcome the problem of getting the above message in the slightest. Windows basically will not honour those settings unless the ownership is changed to a single user. This means that a group of administrators cannot administer files, because only one individual user account can be the owner of the files at any one time. Likewise you cannot grant other users administrative permissions to a file share, because they are blocked by the ownership issue on the files and folders in it.
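For the record, pushing group ownership and permissions down a tree is simple enough to do. The commands look something like the ones below (D:\Home is just an example path); the point of this post is that even after running them the prompts still appear, because the group ownership isn’t honoured.

rem take ownership of everything under the share for the Administrators group
takeown /F D:\Home /R /A /D Y
rem re-grant the group full control, inherited by files and subfolders
icacls D:\Home /grant Administrators:(OI)(CI)F /T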

These features might make sense on a desktop computer used by only one person. They don’t make sense on a server where an administrator has to be able to manage files. For example we have scripted backups using Robocopy, and it is common to see “Access denied” messages in the logs from running these scripts, purely on the basis of this arbitrary ownership change.

Why has this happened? MS has come up with the solution to its massive security headaches that is cheapest and simplest for itself, and put these changes in without asking users what they wanted, because all that matters is getting the bad publicity about security breaches off the front pages of newspapers. Some way back I wrote a hard headed post about all the ways that Vista lies to users. These faults were to some extent fixed in 7, but not in Vista. The solution, as always: fork out more money for a new edition of Windows. A pattern that is becoming more and more common in Windows these days. Customer service has gone out the window.


As an aside: what happens when you click Yes to the dialog box shown at the top of this post? Windows assigns you permissions (Read and Execute only) to the folder in question. Windows does this even if the user account you are logged onto is a member of the Administrators group and has already inherited permissions to the folder; in other words, you can’t use a group to manage security permissions for a resource any more, at least not one of the built-in administrative groups. I haven’t quite figured out yet whether I can make up my own group of administrators and give them permissions, but so far everyone seems to be tainted by association with membership of the Administrators group.
 
By way of more testing I have confirmed that if I give permissions on the folder to individual user accounts then all the permissions work. If I create my own group and make my administrative accounts members of that group and apply permissions for that group, they don’t work. It is like MS has forced a Deny full control by default to the Administrators group. You can have read only access but not full permissions unless those permissions are granted to individual user accounts only.

None of these changes make any sense, nor does Microsoft appear to have any concept of accountability for them.

Wednesday 6 July 2011

Windows Thin PC [3]

Today’s Thin PC experience has been to set up a computer at work to see what I can do with it. I wrote a custom shell in Delphi 5 which has the very simple task of launching Remote Desktop Connection and passing an RDP file to it. This means the user doesn’t have to do anything. By enabling the RD SSO settings in Group Policy, they don’t have to specify their username and password to log on to the RD server; instead, the username and password they used to log onto Windows are used. The thin PC is joined to the domain so all the settings are configured using Group Policy. So when the user logs on to the computer, they don’t get the standard Windows shell; instead, they get connected automatically to the RD Server. When they quit Remote Desktop Connection by logging off (the only option they have), the custom shell detects the exit and automatically logs them off the computer as well.
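For anyone curious about the mechanics: the shell swap itself is just a registry setting, and the custom shell does little more than run mstsc with the RDP file. A hypothetical sketch (the paths are made up, and the same thing can also be done per user with the Custom User Interface Group Policy setting):

rem point Winlogon at the custom shell instead of Explorer for this machine
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell /t REG_SZ /d "C:\ThinShell\ThinShell.exe" /f

rem inside the custom shell, the launch itself amounts to no more than:
rem   mstsc.exe C:\ThinShell\school.rdp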

There is a bit of tweaking still to do but overall this is a very positive and worthwhile experience getting this working.

Tuesday 5 July 2011

Windows Thin PC [2]

Following on from today’s earlier post, I have found out that WTPC is supported by WAIK, which allows for a custom image in the same way as we would set one up with Windows 7 Enterprise. I also found out how to get Remote Desktop client to start automatically by replacing the shell. However I will probably have to write a custom shell (a fairly simple task) to provide a way to shut down or log off when Remote Desktop is closed, as otherwise the user is left with a blank screen.

We can also use Group Policy to set up Single Sign On for Remote Desktop. Therefore our thin PC would be domain joined and the user would log onto it using their regular domain username and password. It would then automatically start up Remote Desktop Client and use their credentials to go straight through to the remote server, so the experience would be pretty seamless.

The next question is whether logging off the session can cause them to be logged off Windows, as well, or whether the custom shell would have to detect their logoff and then automatically log off Windows.

Windows Thin PC

Recently I was logging into the MS VLSC and I noticed a MAK for Windows Thin PC. So naturally I wondered what this was and took a closer look. Windows Thin PC was put on general release on 1 July (a few days ago) and it is essentially Windows Embedded Standard (2010) (i.e. the Windows 7 SP1 embedded edition) made available to SA customers (which includes all NZ schools that are signed to the MOE Microsoft agreements). The installation onto my old home PC, which is six years old and pretty limited these days (for example it can’t run 64-bit), was as easy as it gets. You will need the minimum specs that have been around since Vista, the most important being 512 MB of RAM.

Although MS recommends a WDDM driver (i.e. Vista compatible), as is the case with Windows 7 generally, the drivers that come with it do a pretty good job with a lot of hardware. The motherboard in this case is an old Intel D915GAG, one of those infamous “Vista Ready” motherboards that never got a native Vista graphics driver and therefore can’t do Aero. For this type of application that doesn’t matter. However the generic MS driver doesn’t support widescreen resolutions, so either ensure your displays have a 4:3 aspect ratio or just put up with a slightly fuzzy picture. Drivers were installed by Windows for the other devices and everything works as expected.

Windows Thin PC on closer inspection turns out to be similar to other thin client editions of Windows I have used, with limited features and functionality. However the considerable advantage is that Microsoft supports it, so you aren’t limited to the functionality or support level of an OEM. I got my fingers burned when I bought an HP thin client box that supposedly supported RD Gateway, which I thought would let me log straight in from home on it, only to discover that some of the needed components weren’t installed and HP didn’t make them available, so that was a waste of time and money.

The main application I could see us using Thin PC for is a Remote Desktop client. I assume that all our Thin PC machines will be domain joined and have an automatic logon to a shared account. The desktop will be completely locked down and the only allowed application will be the Remote Desktop client which will automatically load and come up. The user is then logging in remotely using their own username and password, to a Remote Desktop server, which gives them the desktop they are normally using. We have done testing with Remote Desktop configurations with student accounts and also with hardware thin clients so it is a pretty standard lite computer configuration used in lots of educational settings.

Well, as it happened, the Remote Desktop experience was pretty much what I expected. It fully supports the functionality of RD Gateway and the like, and I have now got this Thin PC at home that I can log in to the school’s network on, although this is mainly just for demo purposes as my regular home PC can also do this of course. Thin PC is the replacement for Windows Fundamentals for Legacy PCs, which was based on XP Embedded rather than Windows 7 Embedded. WFLP is quite old having come out five years ago. It will be interesting to see if imaging can be used with WTP, although I doubt this is really necessary because the default installation does the job and nothing needs to be installed on it.

Monday 4 July 2011

How to fix Windows 7 logon error: “The Group Policy Client failed the logon. Access is denied.”

I have seen this particular error several times with Windows 7 and Vista and we are not helped by a lack of documentation from Microsoft for this problem.

In my case the most recent instance of this occurred when I had to drop and recreate a user account. The first time I got the error I tried deleting the local profile, deleting the server profile, giving the user administrative permissions on the laptop, you name it. In some instances, instead of getting the above message, the user would appear to log in as normal until “Preparing your desktop”, at which point they would be logged out with no further explanation.

After a great deal of frustration I came across this helpful page and adapted the instructions to my situation and the problem is solved. Here is how I applied the steps:

  1. Start up Regedit on the affected computer
  2. Go to HKEY_USERS
  3. On the File menu click Load Hive
  4. Go to the folder of the affected profile and open NTUSER.DAT
  5. Name the new key e.g. Profile
  6. Right click this key and select Permissions
  7. Select Advanced
  8. Add the account of the user whose registry this is, give them Full Control, and tick the option to replace permissions on child objects with inheritable permissions from this object.
  9. On the File menu click Unload Hive
  10. Close Regedit.

In this instance at Step 7 I found the SID of the previous user account had full control.

I still don’t have the foggiest idea why the new user account didn’t get the permissions assigned, or how this happens in the first place. But it’s been a long day and it’s time to go home.

FOOTNOTE: This all went well until I tried logging in the user on the Remote Desktop server – which picked up their new roaming profile, instead of the local one on their laptop (naturally), and threw the same error. To cut a long story short I had to repeat the steps on the server copy of the profile. Since this profile was created new, I don’t have any clue as to how the incorrect permissions got set in its NTUSER.DAT file.

Friday 1 July 2011

Native VHD data integrity issues / V2P [3]

The first thing to say is that we are now moving all of our deployments which have been on VHD across to physical, i.e. V2P. This includes all computers such as desktops, although being networked, with users’ personal files redirected to network shares, they are not as critical as the laptops, which have all the user’s stuff inside the same VHD. Simply put, we are finding a higher incidence of boot failures with VHD on desktops, indicating we are perhaps pushing the technology beyond what was expected of it.

However this doesn’t take away from the greatness of native VHD as an image development/build scenario, because you can still do that development process based around native VHD and then deploy to physical. Doing so is currently a two-step process using ImageX: mount the VHD to a drive letter, capture it with ImageX, then apply the WIM to the target using ImageX. What I am hoping for in the future is that Microsoft will come to the fore and change ImageX to work directly with VHDs, so we don’t have to keep WIM and VHD versions of the same image.

I wrote further back that I had figured out that we only need to keep pre-sysprepped images and sysprep them on each machine at deployment. Now our remaining post sysprep images will be getting wiped soon so I have enough disk space to store the WIMs I have to make of the deployment VHDs.

Compared with our native VHD deployments to things like a computer suite, it actually takes no more time to deploy with ImageX from a network share than it does with native VHD. You also save the time it takes to copy the WIM locally to the target platform, by getting ImageX to pick it up from the network share and apply it in one step; that step is therefore the equivalent of the VHD copy-to-target stage. The rest of the steps take exactly the same time as they would for VHD. You run BCDBoot the same as you would for virtual, except perhaps giving it a different drive letter. In due course I will have scripts set up to run all the various steps, maybe including the ImageX step.

The good thing for us is that the same technologies are used to prepare VHDs for deployment as are used with ImageX WIM images, so there is an easy transition between the two. Given that Microsoft have provided this great technology for image testing and development, that it really is only suitable for test environments, and that they have integrated the ability to mount VHDs into the Windows 7 and Server 2008 R2 GUI as well as the command line (Diskpart), I am quite hopeful they will come to the party with an ImageX enhanced to work directly with VHDs, so that these images can be deployed to physical, which is what ImageX is for.

Wednesday 29 June 2011

Native VHD data integrity issues / V2P [2]

Last time around I posted on the issue of native VHD data integrity considerations. Native VHD is a very fast to deploy imaging solution. However because having your whole hard drive in a single file increases your vulnerability to disk corruption which could make it much more difficult to recover data in case of bad sectors, I do not recommend Native VHD for computers on which the user will store all their own data, unless that data is stored on a different disk partition.

There are a couple of scenarios for addressing this:

  1. Move the user’s profile storage onto a separate disk partition. Telling Windows to use a different drive for the Users folder should actually be possible using the MKLINK command, which creates what is known as a Directory Junction (there is a rough sketch of this after the list). There is plenty of documentation on how to do this on the web. However my primary concern is whether this scenario is officially supported by MS. There are some scenarios where moving directories that are expected to live on the C drive can break the installation of service packs or updates. So this might not be the best option, unless you have tested it and are completely sure it works.
  2. The second scenario is simply to do a V2P, converting the VHD to a WIM file for final deployment. At the moment this is what I am doing for a trial. We will partition the disk just as we do with VHD boot, but instead of copying the VHD to a partition and setting it up there as the boot drive, the WIM file will be applied with ImageX to a partition and BCDBoot will, instead, have to be invoked to cause the computer to boot from this different partition.
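To illustrate option 1, a directory junction for a relocated Users folder would be created along these lines. This is purely hypothetical, since I haven’t tested it, and it would have to be done from Windows PE (or during setup) while the Users folder is not in use; the drive letters are whatever PE assigns:

rem copy the profiles to the data partition, remove the originals, then junction the old location
robocopy C:\Users D:\Users /E /COPYALL /XJ
rd /s /q C:\Users
mklink /J C:\Users D:\Users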

The scenario for a V2P is pretty easy to get started with. Simply mount a VHD to a physical computer. In our case I have a virtual server that has the image VHDs stored as files within its own virtual hard disk. Without shutting down that virtual server I can go into Disk Management and attach a VHD, which makes it accessible through a drive letter. I can then start up a deployment command prompt as an administrator, and run this command line:

imagex /capture source_path destination_image "image_name" /compress none /verify

  • source_path is the directory path to be imaged e.g. C:\
  • destination_image is the path and filename of the WIM file to be created
  • image_name is a text string that gets saved in the WIM file to say what it is
  • /compress is an optional switch to specify compression. Turning it off will speed things up
  • /verify enables verification (error checking) of the files as they are captured
  • Note that the /capture switch requires all three of the parameters listed above (source path, destination image and image name).
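For example, with purely illustrative paths, a capture would look like:

imagex /capture V:\ D:\Images\laptop.wim "Laptop image June 2011" /compress none /verify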

This got me a WIM file in about 38 minutes, which is quite reasonable for roughly the same number of GB of data. The actual WIM file itself is only 21 GB, which is interesting considering compression was turned off. Windows automatically excludes a few paths, but it does bring up the idea that the VHD file could be compacted, though I can’t be bothered doing this. Also, WIMs support Single Instance Storage, which probably isn’t operating in VHDs; this could also reduce storage.

I then booted up the target in PE and performed steps similar to VHD imaging, running a script through Diskpart to create the partitions the same way, then rebooted to get the proper drive letters assigned to the disks. Back in PE, I had to copy ImageX to my network deployment file share as it is not included in a standard Windows PE boot image. It ships with the WAIK, and since my virtual server mentioned above is also a deployment server (it has WAIK installed on it) I logged onto that and copied ImageX to the deployment share. I then ran ImageX to apply the image to the target with this command line:

imagex /apply image_file image_number dest_path

  • image_file is the WIM file that contains the image
  • image_number is the number of the image in the WIM file (since WIMs can store multiple images). In this case 1
  • dest_path is where to apply the image to – in this case E:\
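So, again with example paths, an apply comes out as:

imagex /apply \\server\deploy\laptop.wim 1 E:\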

The apply process looked like it was only going to take about 15 minutes so I found something else to do while it was working. This is quite quick, as every other time I used ImageX it seemed to take a lot longer. Maybe I was trying to do backup captures with a lot more data. In this case, incidentally, the image is stored on a network share, so ImageX is doing pretty well considering it has to download it over the network connection, although to be fair the server is physically about 2 metres away from the target. There is a bit more distance in copper and 3 switches involved but all of those are in the same building and running at gigabit speed. As it happened the image was completely applied in 16 minutes which is a pretty good achievement.

Once you have finished running ImageX the next step is to run bcdboot in order to designate your deployment partition as the one that gets booted up. This comes about because Windows 7 (and Vista) use a separate, small system partition to boot the computer; the boot files there then start the operating system from the partition the image got loaded onto. The command here is pretty simple:

bcdboot windows_path /s system_path

where windows_path is the Windows directory in the OS partition and system_path is the drive letter of the system partition.
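With the drive letters as they appeared in Windows PE in this case (E: for the OS partition and, for the sake of example, S: for the system partition), that comes out as something like:

bcdboot E:\Windows /s S: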

We already used this command for our native VHD deployments in a command script so it looks much the same.

Once this is complete, try rebooting to see if Windows will start up. I found that indeed it booted up as expected. If your VHDs have already been built on that target platform then it is likely they will have all of the required drivers incorporated, so you are unlikely to run into driver installation problems. As expected, Windows has set the partition to C drive (which will occur irrespective of the drive letters that appear in Windows PE; in this case E:).

Therefore the likely scenario for us for laptop deployment is to convert the final deployment image into a WIM and deploy it using a modified version of our Native VHD deployment scenario. We will therefore keep the use of Native VHDs to two scenarios:

  • Directly imaging platforms where user data is not saved to the boot drive (such as student computers or other networked desktops)
  • Testing our laptop deployments only – the actual deployment will be physical.

I now have to decide whether to do a V2P on those laptops I sent out. Since we already partitioned the disks, that would be a relatively simple ImageX step, but I would have to make a backup of the VHD first. Both are relatively straightforward; the main issue is how big their VHDs now are, since we copied all of their data into them.

V2P added to VHD image development does require that extra step so we hope that MS will develop a version of ImageX that works directly with VHD files – at the moment ImageX only works with WIM files. This would eliminate the VHD to WIM conversion step and therefore save more time as well as the need to perform that conversion each time the VHD gets changed, and the extra space needed to store the WIM files.

The overall lesson is not to be too bleeding edge, and to read all the documentation. If you were backing up your VHD regularly then there wouldn’t be a problem. But people don’t do this. Microsoft really does talk about Native VHD being a test scenario. I haven’t seen them support it as a production scenario. It is mostly something that lets you service VHDs without special servicing tools (in other words, in scenarios where you can’t service them except by booting up) and it is only supported on Enterprise and Ultimate editions of Windows 7. We will continue to use it for both virtual and physical deployments as a useful image development system, but the actual deployment will be physical where necessary.

Data Recovery (and native VHD data integrity issues)

There are a couple of important considerations with the use of native VHD based imaging. These are particularly necessary to think about when you are creating an imaging system that is based around the native VHD boot capability.

Firstly, computer hibernation is not supported. Microsoft hasn’t particularly provided us with any reason why this is the case; they just state that that is so. This could be an issue with laptop computers.

Secondly, and this is the big one, all of your C drive is stored in that single file. In theory, this increases the risk of data loss from a disk if for some reason that single file is affected by errors on the physical hard disk.

Let me expand a bit on the second scenario. From time to time we see laptops that have a damaged hard disk. Even if the computer can’t boot Windows it is usually possible to recover most of the files on the disk simply by copying. It is rare to lose all of the data on the disk in a situation where there are only a few files that may be unreadable.

If you extrapolate this into the Native VHD scenario then if all your data is stored in a single file and that file becomes unreadable, then normal scenarios for copying files will not work. You will have to consider whether scenarios exist where you might be able to recover only part of the file and if there are any data recovery tools or systems readily available that can retrieve files from a damaged or partially recovered VHD file.

I have just about completed a deployment of laptops with Native VHDs and this scenario has somehow escaped me up to this point. It is very rare for me to have to deal with a situation where a hard disk fails to the point that data cannot be recovered off it, or it is unreadable by Windows (or Windows PE). Usually if there is a small amount of bad sectors I would boot to PE and then use Robocopy to copy as many files as possible onto a backup disk. If a few files got missed because of corruption we would look at recovery scenarios but this has never happened.

If a VHD file could not be copied in its entirety, you would have to make use of more sophisticated data recovery scenarios. There are some good Rescue CD options for this.

My idea for the moment is to take a look at V2P scenarios. I will post again later on once I have looked further into this scenario.

Monday 20 June 2011

Windows Thin PC Due For Release

If you have a subscription to the Volume Licensing Service Center, you may have noticed a MAK key for Windows Thin PC lately. WTPC is the replacement for Windows Fundamentals for Legacy PCs, and just as that was offered to NZ schools on the NZ MSA, it appears that WTPC will also be offered as part of SAB.

WTPC is a lite edition of Windows 7 and according to the FAQs it is based on Windows 7 Embedded. It will be downloadable through SA from 1 July. It would appear at this time that it could support older PCs with 512 MB of RAM and be able to be used primarily as a terminal server client. Basically this would be an alternative to installing Windows XP on those old machines. However we will have to see if a machine that doesn’t have its own Windows 7 driver could run WTPC successfully because such machines don’t have the WDDM driver that is recommended.

We do have some older PCs in classrooms that aren’t able to run Windows 7 so I expect this will be tried out to see what we can do with them in due course.

Wednesday 15 June 2011

What works in Windows 7 imaging and what doesn’t

With all of the experience I have amassed in Windows 7 imaging this year and with our experiences of different technologies it is becoming abundantly clear what works and what doesn’t, for the number of computers that we have.

  • What doesn’t:
    • MDT. Although this has a great deal of promise it is too complex for an organisation of our size. In order to get the best out of MDT it is necessary to spend a lot of effort setting up and maintaining MDT itself, as well as scripting the installations for different platforms. This includes scripting the automatic installation of third party applications, which doesn’t always work and isn’t always well documented or supported.
    • Deploying images that have been sysprepped from installation on one particular hardware platform. Our experience has been that it only takes small changes in a target hardware configuration for Windows to refuse to complete installation from an image that has been sysprepped, regardless of which drivers are involved.

The second point is a particularly troubling one. Our experience with Windows 7 is that it can refuse to complete installation from an image if the hardware is slightly different even where the drivers are not boot critical. This appears to be a fatal flaw in Windows 7 that should not even be an issue. If a device is not boot critical then it should be disabled and the installation continued allowing the affected device to be set up at a later time.

The second point also has an impact on MDT since it is sysprepping an image before it is captured. It may be that MDT’s driver injection process can work around this limitation but not having been aware of it at the time and not being a user of MDT now I can’t say if that is the case.

Along with the built in sysprep limit, this has caused me to review our imaging strategy as below.

  • What does:
    • Native VHD imaging. A great technology that makes it easy to deploy and back up images on target platforms simply by file copying procedures.
    • Deploying pre-sysprepped images to a target and sysprepping only that target as the final setup step. DISM is used only if there is a problem with a boot critical driver.

We have basically proved with our imaging experience to date that a pre-sysprepped image can be deployed to a relatively wide range of hardware. If there is a boot-critical problem with such an image it is a relatively easy process to service the image with DISM and inject the necessary driver so it will boot on the target. Pre-sysprepped images do not have the same problem with missing or incorrect drivers that sysprepped images do, unless the driver is boot critical, such as for a hard disk or disk interface.

Because images are only sysprepped as the final deployment step on each computer, there is not going to be a problem with drivers, provided the pre-sysprep image contains all the correct drivers for the target.

There will be fewer generations of images to store and thus a saving in disk storage space.

There will also not need to be as many different images for different hardware platforms especially those that are not used very often.

Thursday 2 June 2011

VHD Resize [2]

A couple of posts back I wrote about getting a native VHD down to a smaller size to use on a system with a smaller HDD than the one it was originally built on. That time I just used standard defrag along with shrinking in Disk Management. This time around that wasn’t going to get the VHD small enough to fit on an 80 GB HDD, so I had to try another option. Helpfully, Defrag will put information into the event log about immovable files. Using that I worked out I needed to boot the VHD and turn off System Restore in the OS, then mount the VHD in another OS, defrag it again, and shrink again. This time, success!!!! The VHD shrank massively to 20 GB. After considering the further options, I then expanded it again, up to 50 GB.

After doing this you still need to change it from a dynamic 127 GB file (in this case) to something smaller. Even though it only has a 50 GB partition in it, Windows 7 will still try to expand it to the full 127 GB at startup, which of course fails. So VHDResizer is the next step to get it to, in this case, a dynamic 50 GB file.

There are still a few things I don’t understand, like why a VHD that physically only needs 32 GB of disk space can’t be shrunk in partition size below 72 GB. I presume that the dynamic format compresses zeros or blank space, but if it can manage this then it should be possible to defrag that blank space as well. Instead we get the fiction that the unmovable file can’t be shifted, yet we know it is possible with third party tools.

The next little hassle with the target was to get it to boot Windows PE. When I fed it the pen drive, it spat it out. So then I had to spend a lot of time creating a Windows PE 3.0 boot CD, and that is like chalk and cheese compared to the pen drive; it displays the old Vista progress bar instead of the Windows 7 startup animation, and takes a lot longer, with much more blank screen, to get to the command prompt window. From there, after the standard disk initialisation and VHD copy, it was a surprisingly short step to booting up on the VHD and getting this unsysprepped image to start.

Tuesday 17 May 2011

Messy Imaging

There are some scenarios that DISM doesn’t work in as you would expect. When I set up a reference image for a new platform, because of my imaging topology I have images that are pre-sysprep, post-sysprep and post-dism. Generally I would choose a pre-sysprep image, but then it has to be mucked around with and sysprepped before it is ready to use. If I only had one laptop, maybe it would seem simpler to use the post-sysprep image?

Well, it seems not all images are equal. We got an EliteBook 8540p laptop; it’s the only one the school will ever have. So I thought a post-sysprepped image, with DISM used to give it the right drivers to start up, would work. I did something similar last week with a 6710b, but used the pre-sysprep image because it would need to be cloned. The problem is that for the 8540p the post-sysprep image doesn’t work: it bluescreens before the animated pieces of the Windows 7 logo have even finished circling. Yet if I use a pre-sysprep image serviced in exactly the same way, there is no problem.

So it appears there is a good rationale for having three image generations – apart from the sysprep limit that forces us to have pre-sysprep images in the first place. Perhaps there is not a need for a separate post-DISM generation (the DISM stage is done on the post-sysprep image to prepare it for final deployment). At the moment the process for readapting an image to a new platform is to copy the pre-sysprep image from another platform, service it with DISM to remove old drivers and add new ones, and move the serviced image to post-DISM for deployment prototyping. Once the image has been prototyped on the target platform there is then a series of pre-sysprep, post-sysprep and post-DISM stages for the reference image for the new platform. In this case some of those steps might be skipped.

So it would appear there is some difference between an image that has been sysprepped and one that has not when it comes to servicing the image offline so that it can be reused on a different platform.

Wednesday 11 May 2011

Resetting a bootable VHD for another hardware architecture

Today I tried putting a laptop bootable VHD image onto a different laptop. I had tried this before and it wouldn’t boot. There are two problems actually:

  • The VHD file, when expanded to its full size (126 GB) at boot time, was too big to fit in the available space on the HDD. Microsoft has a new Blue Screen of Death code (0x135 I think) that comes up when this happens.
  • The VHD had to be serviced with driver files for the different laptop so that Windows would actually boot on it.

Solving the resize problem is quite involved. The VHD has to be attached to a virtual machine, defragged and then shrunk in that VM. In this case defrag didn’t achieve anything beyond what shrink could do on its own, but shrinking did bring the size down to 72 GB, which should be acceptable.

This page here gives one methodology with the steps you can use. I simplified it somewhat and just shrank the partition in Disk Management under Computer Management. I then moved the VHD back to the server it was originally on and ran a tool mentioned on that page, VHDResizer, to size the VHD down to the partition size. This basically works by creating a new VHD file and copying the existing one into it. The method on that page probably allows a much greater level of resizing, but it already took so long to get the VHD defragged and so on that I cut out some of the steps and just lived with the VHD being larger than desirable.

The next step was DISM, in this case injecting the chipset drivers for this laptop, which are the ones that contain the disk drivers. Finally the VHD was copied to the laptop and it rebooted successfully. Finally!

Sunday 1 May 2011

Don’t get burned by Trademe’s restrictive shipping fees policy

I’ve operated a personal Trademe account for a couple of years. I only traded a small amount of stuff and haven’t had to ship anything that I can remember. Earlier this year I opened a work account and started to sell surplus computer equipment, which has brought in a small but useful amount of money for us. However, in doing this I have run into points of disagreement, because Trademe seeks to regulate and restrict shipping costs in a way that is, in my view, completely at odds with the rest of the way their site operates. The auction process is a form of arbitration or negotiation. In effect the final sale price of goods is negotiated between the parties and only lightly regulated in a system like Trademe’s – as it should be.

But when you get to shipping charges, Trademe has a clause which says “You may only charge reasonable amounts for shipping”. If you ask for that to be broken down, it comes to “You may only charge for the physical cost of packing” and “You may only charge the amount you pay the shipper” (for example the fee for ParcelPost or a courier). In actuality, when shipping goods, you are likely to incur additional costs. In our case, I would expect it to take me about half an hour to pack the goods, and there is likely to be a fuel cost for me to deliver the package to a post shop. There is no logical reason why a shipping fee can’t include these costs. I think most people would feel that these are “reasonable”.

Every time that I have spoken to Trademe before to ask a question about how things work, they have always given me an explanation. This is the exception. They have not explained why they have this draconian restrictive shipping fees policy, nor are they open to having their policy challenged. It may be argued that the extra costs should be factored into the price of the goods themselves. This results in a higher price for buyers who are coming in to pick up goods. You could give a discount but this is possibly contrary to Trademe conditions as well as, like many things on their site, discouraged by financial penalty. It may be OK for a business to operate this way but many small traders are doing this part time or as a hobby, do not have a shipping department at their disposal to dispatch goods in the most efficient way, and therefore are doing it themselves and will incur the extra costs. Every other business that is doing mail order can charge the shipping they wish and the parties to trade can negotiate directly on shipping charges.

The only reason I can see for this policy is to push up the sale price and therefore Trademe’s commission on the sale. As I have noted, in real life the cost of shipping, outside of doing business through Trademe, is one of those things that people can always negotiate. People are questioning the charges and fees that Trademe imposes, some of which are ridiculous (a fee to put a reserve on goods, a fee to change the closing time of the auction, etc). Therefore there is room to question other parts of how Trademe operates, and the possibility that they might be abusing a dominant market position. The major competitor to Trademe is the Sella site, which has managed to mop up Zillion and Sellmefree among others. I expect their popularity is far less, but they have the correct policy on shipping fees, namely that they are by negotiation. I have had my fingers burned twice by Trademe over shipping fees, and they won both times because of the restrictive rules and (excessive) penalties they impose. I also believe strongly that there are traders who refuse to ship goods because of Trademe’s policy, and that other traders are interpreting “reasonable” as I would interpret it and as most “reasonable” people would interpret it.

Given the exorbitant price that Trademe was sold to Fairfax for (some $750 million – who would pay that for a simple website?), they are a “big business” and their policies and charges need to come under scrutiny, as is the case for all big businesses.

Tuesday 5 April 2011

Google targets enterprise with Chrome MSI

Last time I wrote about the difficulties we had experienced with the standalone Google Chrome installer that is supposed to install for all users. Although I was not able to get support in the Chrome forum, I have discovered today that Google is producing Chrome in an enterprise edition. As the article I was reading noted, Firefox has traditionally been quite weak in this area. When we deployed Windows 7 to our suite PCs, we put the beta FF4 onto them. It wasn’t long before users let us know that they couldn’t get out to the internet with FF – for what reason I don’t know, but we quickly ditched the image and loaded a new one with Chrome instead. The standalone installer worked on that occasion but on numerous others since I haven’t been able to get it to do a full install. So the Chrome MSI and other enterprise offerings look as though they will address this problem.

The other main part of the enterprise package is Group Policy templates, which let sysadmins preconfigure settings for users and computers on the network. Overall I would say this looks like a much better outcome than Chrome has offered so far; Google has certainly put some effort into beating IE at its own game.

Google has also addressed the operation of Google Update in an enterprise environment with a policy template. Of course, if you prefer not to use Group Policy, you can directly enter settings into the registry for the keys specified.

Google SketchUp is an application that doesn’t use Google Update to control its update settings. However, these settings can be controlled directly using the following registry keys (a scripted sketch of setting them follows the list):

  • HKEY_CURRENT_USER\Software\Google\SketchUp7\Preferences – change the value of CheckForUpdates to 0 to disable the check.
  • HKEY_CURRENT_USER\Software\Google\SketchUp7\WelcomeDialog – change ShowOnStartup to 0 to stop the welcome message (which often says “Update to the next edition”).
  • One of the reasons you get a welcome message is to get the user to select a template. You can set the path for the default template by writing the DefaultTemplate value in HKEY_CURRENT_USER\Software\Google\SketchUp7\Preferences. If you are using a login script or Group Policy Preferences to automate this, you will need to determine in some way where the templates directory is, since this will vary across 32 and 64 bit systems, and for different editions of Sketchup. Unfortunately there is not an easy way of reading the location from the Registry, just as there is not a way of determining which editions of Sketchup are installed on a local computer.
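
If you want to push these out from a login script rather than Group Policy Preferences, the reg command can set them directly. A rough sketch follows; I am assuming the values are DWORDs, and the template path shown is purely a placeholder, so check an existing install to confirm both:

rem Sketch only: value types and the template path are assumptions; verify against a real install.
reg add "HKCU\Software\Google\SketchUp7\Preferences"   /v CheckForUpdates /t REG_DWORD /d 0 /f
reg add "HKCU\Software\Google\SketchUp7\WelcomeDialog" /v ShowOnStartup   /t REG_DWORD /d 0 /f
rem DefaultTemplate takes the full path to the .skp template you want; this path is made up.
reg add "HKCU\Software\Google\SketchUp7\Preferences"   /v DefaultTemplate /t REG_SZ /d "C:\Program Files\Google\Google SketchUp 7\Resources\en-US\Templates\Example.skp" /f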

Monday 21 March 2011

64 bit Native VHD Deployment [3]

I have just completed making our laptop image (also 64 bit) and it is to be tested shortly. I used MDT for the previous deployment but now I am switching to native VHD for this as well. It looks like the laptop VHD is OK and there are no problems. So this means we can ditch MDT altogether and just use native VHD for everything except the older Ghost stuff (XP), as we don’t use MDT for XP anyway. MDT is much more complex. It has its place if you do a lot of imaging and want to have fewer images, with customisations for various types of hardware handled automatically. If you have only a few images and just want a simpler deployment method, then native VHD, building on the Windows AIK, means your deployment is just a simple file copy of the VHD file instead of applying a WIM. Updating the image is then a much simpler process of just replacing the VHD file with a new one. At some future point I will detail how we do this in practice.
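
For anyone who hasn’t seen native VHD boot before, the guts of it is just a BCD entry that points at the VHD file. Something along these lines (the description and path are examples, and {guid} stands for whatever identifier the copy command prints):

rem Sketch: add a boot menu entry that boots Windows directly from a VHD file.
rem Replace {guid} with the identifier printed by the /copy command, and the path with your own.
bcdedit /copy {current} /d "Windows 7 (native VHD)"
bcdedit /set {guid} device   vhd=[C:]\Images\suite.vhd
bcdedit /set {guid} osdevice vhd=[C:]\Images\suite.vhd
bcdedit /set {guid} detecthal on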

The laptop image was deployed successfully, although there seem to be issues with x64 Sysprep and the answer file, because the wizard that comes up at first boot now has six pages instead of three. In the process of testing the laptop image I updated it to the recently released SP1 of Windows 7 and had to install an additional software package. The deployment of the computer suite image has also been successful; it too is being updated to SP1 at the moment, along with some minor tweaks.

When we transfer staff from their old laptop to the new one we make a complete image of the hard disk of the old laptop as a backup and then extract their personal files (My Documents etc) from the image. In the past I have used Ghost for this. However these days it is just easier to use the Disk2VHD tool released by the Sysinternals group of Microsoft to make a VHD backup of the HDD instead, which is done online by running it from within Windows. Then there are various ways to open the VHD. My preference is to use the latest version of 7-Zip which can open all sorts of useful formats like VHD, WIM and ISO among others, and it is freeware.
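
Disk2vhd can also be driven from the command line, which is handy as part of a transfer checklist. From memory the usage is along these lines, with * meaning capture all volumes into the one VHD (the destination path here is made up):

rem Sketch: capture every volume on the old laptop into a single VHD on the server.
disk2vhd.exe * \\server1\backups\old-laptop.vhd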

Tuesday 15 March 2011

64 bit Native VHD deployment [2], etc

Having completed the deployment of computers for student use, the next task is to create the image for staff laptops. At this point we need boot media for Windows PE x64, and traditionally I would have created a boot CD. But the Windows PE walkthroughs create one that seems to be a waste of time: really slow to boot, apparently running an older version of PE, and so on, so I have decided not to bother any more. The Probook 6550 card reader seems to be unable to boot my SD card, so I set up a pen drive as a boot device instead and it copes with that OK. Pen drives are so much easier to set up than a bootable CD, which has to be configured with extra steps to make it bootable and turned into an ISO file. You don’t need to do this with a UFD; it’s just a matter of formatting the device, copying the files onto it and away it goes. I have customised Windows PE by using startnet.cmd to start a custom script that maps a network drive to an installation share containing the other scripts used to perform the various tasks related to native VHD boot, which simplifies things a lot.
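
The customised startnet.cmd only needs a few lines. A sketch of the sort of thing it contains (the server, share, account and script names here are all invented):

rem startnet.cmd - runs automatically when Windows PE starts up
wpeinit
rem Map the installation share and hand over to the deployment scripts on it
rem (credentials and paths below are placeholders)
net use Y: \\server1\deploy P@ssw0rd /user:SCHOOL\deploy
call Y:\Scripts\menu.cmd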

For the Probook 6550 I have run the HP applications installer over my basic installation VHD and am just about ready to sysprep the image – the next stage after that is driver injection with DISM. Then it will be ready to deploy.

We are now back, having reopened yesterday. One of the things we had completed before the earthquake was the installation of Enable’s fibre connection. At the moment we are not switching to it, but this might happen in the next few months. The biggest and best prospect is a project to lay fibre between two of our sites. If this happens we can consolidate our servers down to one main server and one backup. The current link between sites uses “54 Mbps” wireless (real speed around 15 Mbps), which is very slow and increasingly inadequate when transferring large amounts of data, and it makes it necessary to maintain two full-spec servers in order to reduce the amount of traffic on the wireless link or to give users a reasonable speed experience.

As I continue to explore the world of the AIK, I discovered the Volume Activation Management Tool yesterday. The VAMT is primarily of benefit to those of us who wish to manage MAK license keys, as opposed to using KMS. It has been a lot more straightforward for us at the moment to use MAKs, because the KMS hosts we set up don’t seem to be getting any activation requests and I haven’t had time to check why. The VAMT lets me remotely activate computers that require a MAK, which I guess applies to both Windows 7 and Office 2010, so they don’t need to be activated manually.
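
For a single machine, the local equivalent of what the VAMT does remotely is just slmgr, and for Office 2010 its own ospp.vbs script. Roughly like this; the key is obviously a placeholder and the Office path assumes a default Office14 install location:

rem Windows 7: install the MAK and activate
cscript //nologo %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
cscript //nologo %windir%\system32\slmgr.vbs /ato
rem Office 2010: activate via its own script (path assumes a default Office14 install)
cscript //nologo "%ProgramFiles%\Microsoft Office\Office14\ospp.vbs" /act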

Also part of the AIK is DISM, which I have previously mentioned. It works very well most of the time. The main problem is when it can’t unmount a WIM; this happens too often in my view, and I hope MS will address it, because when it refuses to unmount, a reboot is the only option left.

Windows 7 SP1 has been released. People have discovered some issues when installing it, mostly via WSUS it seems. Unfortunately DISM can’t be used to apply it offline. I wonder if this is yet another example of MS’s stripped-down support model for everyone who isn’t using Azure or other cloud services. Everything that MS does these days seems to be geared towards pushing people onto Azure et al, while everyone else gets a shrinking level of support.

Monday 7 March 2011

64 bit Native VHD deployment [1]

This is my first post since the big earthquake hit Christchurch on 22nd February. I’m not going to write here about the earthquake itself except to say that we have been closed for the past 14 days and will reopen next week. At the moment I am just picking up where things were 14 days ago. I was just finishing the preparation of a 64 bit VHD for our computer suite. Since our 32 bit VHD deployment was, from a technical viewpoint, successful, I have decided to move things along and use native VHD more widely as a deployment system. Hence I have started to produce 64 bit native VHDs for our two major imaging requirements: staff laptops and student computers. The idea is that the computer running the Windows AIK tools used to service these images can run a 64 bit OS (in this case Server 2008 R2), so I don’t need to maintain a 32 bit VM to service 32 bit images, as is required with x86 images. I can therefore store the images on the server’s network shares without having to copy them to the 32 bit VM’s local disk for DISM, which won’t work on a network drive.

The second reason for building new images is to get around the Sysprep Rearm limit detailed in my last posting. The 32 bit laptop images in particular were subject to this limit because they had been sysprepped multiple times. Therefore a new 64 bit image has to be built from scratch and copied before it is sysprepped. My first attempt, building a VHD attached to a Hyper-V VM, failed rather spectacularly when it came to the point where it had to be loaded onto the target platform (to install platform-specific applications). It got partway into the boot screen (the point where the Windows 7 logo gets drawn from various moving parts) and then bluescreened. I poked around trying to debug what was happening but couldn’t work it out, even though a Temp folder is put onto the boot drive which appears to contain information to help track down the problem. I think the issue is that native VHD is very specialised and as yet not well enough documented beyond the MS group that created it.

So I have started again with building images, this time from scratch on each of the target platforms. The starting point in each case is a generic VHD configured as dynamically expandable to a 127 GB maximum size and loaded with the Windows 7 x64 Enterprise installation files. When the target boots to this VHD for the first time it sets up Windows on it. It is not sysprepped; it just behaves like a first-time Windows 7 installation on any target platform.

The 64 bit image for our computer suite has been successfully test deployed and so far it looks good. A trial deployment will follow shortly, along with preparations to reopen our computer suite and the rest of the school. The laptop image will be completed this week as we have to get the new laptops ready (the completion of this was deferred due to the earthquake). The tricky thing about Windows PE, however, is that I will need to create 64 bit boot media or use an MDT 64 bit boot CD that I already have. So my SD card will have to be updated to a 64 bit edition of Windows PE, including net card driver injection. This is mainly because BCDBOOT comes in 32 bit and 64 bit editions, and a 32 bit Windows PE image can’t run the 64 bit BCDBOOT that is needed to set up the BCD when creating a new system from scratch.

Monday 14 February 2011

The nasty Sysprep Rearm 3-step limit and working around it.

If you haven’t heard of Rearm (or SkipRearm) before, it is a tricky little thing that Microsoft introduced into Windows Vista and Windows 7 as part of the more complex activation machinery. And it is an annoying thing. Any one image can only be sysprepped three times before this kicks in and Sysprep reports a fatal error from the Rearm component. I hit it today when I took what was supposed to be my do-all, good-for-everything 64 bit Windows 7 Enterprise image. Last week I copied it to a laptop, installed the laptop-specific stuff on it, then put it back onto the VM, and today I tried to sysprep it and got this error.
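
Incidentally, you can check how many rearms an installation has left before you burn one, which is worth doing on any image you are about to sysprep:

rem Shows licensing details including the remaining rearm count for this installation
cscript //nologo %windir%\system32\slmgr.vbs /dlv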

The simple answer: don’t sysprep more than you have to. Keep a master image that hasn’t been sysprepped, and make all the changes you need to it. Then make a copy of it, and sysprep that copy. Whenever you need to make changes, go back to your master that hasn’t been sysprepped, make the changes on that, copy it, sysprep it, deploy it. It just means extra work copying files around.

I suspect this is the reason that MDT Sysprep stopped working the last time I tried to use it. The imager VM had the same image on it that it always had, and it had been sysprepped a few times, enough to hit that limit. Even though I haven’t been able to find out what error was thrown, it would have had the same rearm count as the sysprep I tried to do today, and so it failed.

As you may recall, one of the criticisms I had of MDT and AIK was the need to keep a 32 bit VM running to allow 32 bit installations to be serviced using these tools. Well, that won’t be an issue for much longer. I think we’re about to bite the bullet and go to 64 bit Windows on everything. And then the AIK and MDT can be installed on a 64 bit server and there won’t be any issue for them. What in fact I will do is install those on my server now and set up new 64-bit-only MDT/AIK shares that they service.

But for right now, for getting those laptops out, what am I going to do? Well, after some thought I decided to start building my image again from scratch, making the necessary copies of the VHD so that it can be done properly this time. Another thing I did today was to run the Office 2010 OCT to make an installation that puts the activation key in automatically, so we don’t have to bother about KMS to get things installed. So pretty soon I will be building a new 64 bit image for our computer suite, and those machines will all be changed over to 64 bit the next time they need reimaging.

Friday 11 February 2011

Don’t use 32 bit Windows with more than 3 GB of memory

Well, the subject of this post is that everyone should be getting used to 64 bit operating systems and migrating to them. Today I saw a laptop with 4 GB of RAM, the very latest HP that we can lease for a school under the TELA scheme. With the 32 bit edition of Windows 7 it reports 2.92 GB of RAM available. I thought this was a bit low, as I knew more memory is reported under the 64 bit edition of Windows 7, and I was quite shocked to discover that the 64 bit figure is 3.8 GB. In other words a difference of around 900 MB, which is quite significant.

Since, as far as I know, these TELA scheme HP laptops are only available with the 32 bit build that has been put together for the scheme, we will definitely continue to customise our laptops with the 64 bit edition of Windows 7. We are currently looking at the best way to deploy these. As previously described in this blog, I spent a lot of time learning to use MDT, and while it is a very sound method it is technically complex and requires a significant amount of specialised resources to maintain. Therefore I am having a look at native VHD deployment as a means of achieving what we want, along with the use of WSIM and DISM as described in a recent series of articles. As a first stage I am loading a VHD onto one of the actual laptops in order to customise it with the hardware-locked applications that are normally installed by an MDT task sequence. After that it will go back onto the imaging VM for tweaking and then sysprep for deployment. The way to do backups with VHDs (instead of MDT-based capture) is to use Microsoft’s Disk2vhd tool to capture a hard disk into a VHD file. So we could almost dispense with MDT altogether.

After the first week back at school, our native VHD deployment of 30 computers in our suite has been generally successful. The main issue to date is getting Windows and Office activated; because we entered a key for Windows it still has to be activated manually, while Office should be activating against a KMS server but isn’t. I can switch Windows back to KMS in future, but we still have to work out why the Office KMS server (which is a different server from the one that handles Windows 7 activations) isn’t receiving activation requests. As it happens, Office will not stop working; it will just keep warning that it needs to activate, so we have a bit of time to sort that problem out.
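
When chasing the KMS problem, a couple of quick checks from a client are worth doing: what the client thinks its licensing state is, and whether the KMS SRV record is even published in DNS. The domain name below is a placeholder:

rem Current Windows licensing/activation state on this client
cscript //nologo %windir%\system32\slmgr.vbs /dli
rem Check that the KMS host has registered its SRV record in DNS
nslookup -type=srv _vlmcs._tcp.school.local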

Another big task is creating 300 accounts for all our students who have individual logons. Previously we used our SMS to export a CSV that was hacked about in Excel and then fed into a VBScript that used ADSI to set up the accounts. This year, with Musac, I have created a separate Access database to handle this task and to create the output CSV file that feeds into the script. Naturally I have decided some enhancements are possible; for example, with Outlook Live having a PowerShell interface it should be possible to create email accounts at the same time. Further capability will be added later. The main change is that account creation will be automated using the students’ names and, to get around the 20 character length limit on the sAMAccountName field, the UPN field will be set with a 3 character UPN suffix, so each logon will be xxx.yyyyyy@zzz, standardised for all logons. This will be the first time we have set up and distributed accounts with the expectation that users log on with a UPN.

It is important to note when using UPN names for accounts that Windows still sets the %USERNAME% environment variable for the user to the value of sAMAccountName. When you create home folders for your users, bear in mind that the sAMAccountName is what Group Policy Folder Redirection uses for %username%, and that it still needs to be unique even when it is truncated to 20 characters for users whose longer names are stored in the UPN. Log in, look at the environment variables and see how many things use that sAMAccountName value. Also, all our new users are getting their home drive changed to O:, because a lot of computers have extra drive letters from the additional partitions for Windows 7 and native VHD as well as card readers and the like.
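
Our script uses ADSI, but just to illustrate the sAMAccountName/UPN split and the O: home drive, a single account created from the command line would look something like this (every name, OU, suffix and password here is made up):

rem Sketch of one student account; all values are invented examples.
rem Note the sAMAccountName (max 20 chars) versus the full UPN logon name with its 3 character suffix.
dsadd user "CN=Joe Bloggs,OU=Students,DC=school,DC=local" ^
  -fn Joe -ln Bloggs ^
  -samid joe.bloggs ^
  -upn joe.bloggs@sch ^
  -pwd P@ssw0rd1 -mustchpwd yes ^
  -hmdrv O: -hmdir \\server1\home\joe.bloggs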

Sunday 6 February 2011

Deploying Native Boot VHDs [6]

We have successfully completed our first ever mass deployment of computers using the Native Boot VHD technology of Windows 7. This article carries additional detail of the deployment phase of this project.

The average transfer rate for the 15 GB VHD over the network was around 45-55 MB per minute, equating to around 4-5 hours to complete the copying. The machines then had to have several more scripts run to complete the initial setup. In future, updating the machines will only require the VHD file to be replaced. After rebooting, the Windows Setup wizard appears and asks you to provide some settings such as the computer name and a username. It then automatically restarts the computer again and brings up the first-time login. After this the only remaining step is to join the domain; this may be automated in future.

Both of the main activation tasks, Windows 7 and Office 2010 activation, can be handled by setting up a KMS server. This has not yet been done but it is one of the next tasks; it is quite straightforward and relieves you of the need to activate manually and, in Office’s case, to enter the product key as well.

Although the copying stage of the VHD may seem slow it is roughly equivalent to Ghost without the network congestion that multicasting produces. My experience of simultaneously ghosting multiple machines is that it rarely meets expectations, in that it is typical to require several hours or more and thus produce no real time saving at all.

Saturday 5 February 2011

D-Day

D-Day is Deployment Deadline Day. D-Day is also the day on which I have had a record amount of trouble with my work computer, which has had to be restarted twice so far: first when Explorer crashed and wouldn’t restart, and then when Windows Live Writer wouldn’t load.

The process of getting from a reference image to deploying it onto target machines is somewhat multi-faceted. The process we use goes something like this:

  • The VM is shut down
  • A pre-sysprep backup of the VHD is made
  • The VM is booted
  • The VM is sysprepped (which can only be done live); see the sketch of the command after this list. At the end it shuts down again automatically.
  • A post-sysprep backup of the VHD is made.
  • The VHD is copied to another VM for offline servicing.
  • DISM is used to inject the drivers in offline servicing.
  • The VHD is copied to a network share to be deployed.
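
The sysprep step itself is a one-liner run inside the booted VM, something like this, assuming an answer file has been prepared (the unattend path is an example):

rem Generalise the image, configure it for OOBE and shut the VM down when done.
rem The answer file path is a placeholder for whatever WSIM produced.
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\System32\sysprep\unattend.xml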

Bare metal deployment requires numerous steps, and I have been creating and testing command and Diskpart scripts to automate as much as possible. A spare SD card with Windows PE, scripts and tools loaded on it is set up to boot. Press the right key and the hardware boot menu comes up, then select the SD card to boot from it. The first task was a simple Diskpart script to create the three partitions needed on the HDD. So what was done on Wednesday afternoon was to boot every new PC to this card and run this script through Diskpart to initialise and format the disk. Then we had two days off for our annual staff retreat, and I came in to work again today, Saturday.
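
I won’t reproduce our exact script, but the general shape of a Diskpart partitioning script run from Windows PE is along these lines. The sizes, letters and labels are illustrative only, and this sketch lays down just a system partition plus one data partition rather than our actual three-partition layout:

rem Sketch only: wipe disk 0 and lay down a basic layout for native VHD boot.
rem Our real script creates three partitions; this illustrative one creates two.
> partition.txt  echo select disk 0
>> partition.txt echo clean
>> partition.txt echo create partition primary size=300
>> partition.txt echo format fs=ntfs label=System quick
>> partition.txt echo assign letter=S
>> partition.txt echo active
>> partition.txt echo create partition primary
>> partition.txt echo format fs=ntfs label=Data quick
>> partition.txt echo assign letter=C
diskpart /s partition.txt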

Looking at the rest of what was needed, I started with a script that mapped a network drive letter to the share on the file server, to allow the VHD to be copied over the network instead of from an external HDD. This is all fine, but the problem that showed up is that the network drivers for this particular hardware platform are not included in Windows PE. Some time ago I wrote about learning how to inject drivers into Windows PE 2.0. As this is PE 3.0 the process is slightly different and is basically the same as injecting them into a Windows boot image using DISM: the first step is to use DISM to mount the boot.wim file (you no longer need ImageX for this). As we don’t need or want the 1.4 GB of drivers that the VHD got, I just gave it the path to the network drivers folder and it injected only the two drivers required, taking about 4 MB. After all changes are complete, you use DISM to commit the changes back to the WIM. Then this boot.wim was copied over the existing one on the SD card and away we went. The Net Use command can be scripted with both the username and password specified on the command line, so that was handled OK.
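
For the record, the PE 3.0 driver injection is just three DISM commands; the paths here are placeholders for wherever your Windows PE build and driver folders live (the net use line takes the same form as in the startnet.cmd sketch earlier):

rem Mount the Windows PE boot image, inject the network drivers, then commit the changes.
dism /Mount-Wim /WimFile:C:\winpe_x64\ISO\sources\boot.wim /Index:1 /MountDir:C:\winpe_x64\mount
dism /Image:C:\winpe_x64\mount /Add-Driver /Driver:C:\Drivers\Net /Recurse
dism /Unmount-Wim /MountDir:C:\winpe_x64\mount /Commit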

My next act was to attempt to fully script the process of running BCDBoot to copy the boot files and set up the BCD in the system partition. This ran into snags, because as soon as I tried to get Diskpart to set up the partition it threw various errors. In the script I got it to attach the VHD and assign it a drive letter. It objected to the select vol command it was given, and apparently therefore could not assign the drive letter or run any of the other commands except detaching the VDisk at the end. But when I ran Diskpart with the script that attaches and assigns, the volume was there as expected at the end, so I don’t see why all the other errors occurred. At this point I decided it was just easier to put all the scripts and tools onto the Y drive (the network install share I had set up earlier) than to keep mucking around with the SD card, and this lets me make some boot CDs as well to speed things up for deployment.
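
When the Diskpart part behaves, the intended flow is simple enough: attach the copied VHD, give it a letter, then point BCDBoot at the Windows folder inside it. A sketch of that sequence, not the script that was throwing the errors above; drive letters and paths are examples, and S: is assumed to be the system partition:

rem Attach the deployed VHD and give it a drive letter (all values are placeholders)
> attach.txt  echo select vdisk file=C:\VHD\suite.vhd
>> attach.txt echo attach vdisk
>> attach.txt echo select partition 1
>> attach.txt echo assign letter=V
diskpart /s attach.txt
rem Copy the boot files from the Windows installation inside the VHD to the system partition
bcdboot V:\Windows /s S: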

Also in the deployment tools documentation for the Windows 7 AIK are detailed instructions on how to install the Recovery Environment (RE) and customise it to specific requirements. A future project is to add the RE to the system partition so that boot media isn’t needed to boot up the machines for reinstallation in future. I’ll look into it sometime when I get some free time.

Basically at this point all the machines are copying the image all at once and network performance seems to be extremely slow. This means the machines will probably take half the night to complete copying the 15 GB file. I’ll come back in the morning to finish working on them.

File and registry virtualisation in Windows 7 and Vista

File and registry virtualisation in Windows 7 and Vista is a technology that deals with applications that need to write data to locations where the user does not have write permission, and with users inadvertently choosing such locations. The latter case in particular can trip you or your users up, and it is probably better to educate your users about the limitations of saving their data into “system reserved” locations.

For example, C:\Program Files (or the equivalent for a different boot drive letter) is such a location, and you may find (as I did this week) that attempting to save data files from an application into this directory results in the file apparently “disappearing”, or only “appearing” in certain views. In this case I found I could see the file only in the Save dialog of the exact application that created it; Explorer wouldn’t display the file at all.

After looking into this I found a resource on MSDN that details the process. In this case the file was being written into a subdirectory of C:\Users\<username>\AppData\Local\VirtualStore, and it did actually exist in that location when I checked. The confusing part is that what appeared in the original location looked like the file itself, not a shortcut to a file stored somewhere else.
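
If you are ever hunting for one of these “vanished” files, the virtual store is easy to check from a command prompt run as the affected user:

rem Files the application thought it saved under Program Files end up here for the current user
dir "%LOCALAPPDATA%\VirtualStore\Program Files"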

These technologies appear to be targeted at legacy applications in that they only apply to 32 bit apps, not 64 bit for example. As such they fit in with applications that wouldn’t meet the current Windows Logo cert requirements.

Friday 28 January 2011

Migrating to Windows 7 from XP with OS Specific Desktop Policy

This article focuses mainly on the user experience issues that come with the migration, as opposed to the task of getting new hardware or upgrading older hardware and installing the operating system on it. There are challenges arising from differences between the operating systems and the configuration settings that apply to them. This is especially true if you are using Group Policy or Registry settings to lock down desktops for certain classes of users, as is quite common with students in a school environment, for example.

Whilst it is tempting to think some of the differences can be dealt with by using loopback policy to apply OS-specific settings only to machines running a specific OS, this brings other complications, mainly that a user with administrative rights who logs on to such a computer finds the same lockdowns applied to their account. From experience I would highly recommend using loopback policy for desktop experience settings only as a stopgap measure, until all of your computers, or as many as possible, can be moved to common settings, so that the majority of those settings can be taken out of loopback policies and put back into the per-user section of the policy tree, where they don’t affect all your users in the same way.

For example, we used to redirect the Start menu in Windows XP so that it displayed a pre-configured set of icons. In fact, it was redirected to the All Users Start Menu, which in turn was configured with exactly the set of icons we wanted the user to have access to. This immediately causes problems in Windows 7 because the equivalent is stored in a physically different path. I ultimately decided to stop using this redirection completely in Windows 7. This means, at least for the moment, that I have to find a way of differentiating between Windows 7 and Windows XP computers. In the short term I can do this using a loopback policy. However, that will also limit me if I log on to one of these computers as an administrative user, because that policy ends up being applied to every user in the same way. In the longer term, therefore, I need to take as many of the policy settings as possible out of loopback and into a user-specific policy.

Browsing through my Windows 7 policy file, there aren’t that many settings that are Windows 7 specific. So it wouldn’t take long to weed out the ones that only apply to 7, loopback only those specific settings, move everything else into a user-specific policy, and make life simpler for the people administering these computers. I wouldn’t say it will totally fix the problems, and it is possible we will look at other means of making these computers administrable. One option is to set up local administrative accounts, as these are not subject to the limitations of group policy.