ConfigMgr 2012 Query – All Domain Microsoft Windows Systems with Client and Endpoint Protection Installed.
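A sketch of the WQL this kind of query boils down to, built on the standard SMS_R_System and Add/Remove Programs inventory classes (treat the Endpoint Protection DisplayName match and the "YOURDOMAIN" filter as placeholders to adapt to your environment):

    select SMS_R_System.Name, SMS_R_System.ResourceDomainORWorkgroup
    from SMS_R_System
    inner join SMS_G_System_ADD_REMOVE_PROGRAMS
        on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
    where SMS_R_System.Client = 1
        and SMS_R_System.ResourceDomainORWorkgroup = "YOURDOMAIN"
        and SMS_R_System.OperatingSystemNameandVersion like "%Windows%"
        and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName like "%Endpoint Protection%"

Note that 64-bit clients report Add/Remove Programs data through the SMS_G_System_ADD_REMOVE_PROGRAMS_64 class as well, so you may need to query both.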

ConfigMgr 2012 Query – All Microsoft SQL 2012 Systems on Domain with Client Installed.
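Along the same lines, a sketch for picking out SQL Server 2012 machines (again, the DisplayName pattern and "YOURDOMAIN" are placeholders, and the exact product naming in Add/Remove Programs varies by edition):

    select SMS_R_System.Name
    from SMS_R_System
    inner join SMS_G_System_ADD_REMOVE_PROGRAMS
        on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
    where SMS_R_System.Client = 1
        and SMS_R_System.ResourceDomainORWorkgroup = "YOURDOMAIN"
        and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName like "Microsoft SQL Server 2012%"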

Lenovo IdeaPad Yoga 13 Wireless Adapter Woes.

I’ve raved about my Lenovo Yoga 13 before, and I’ve recommended it to several colleagues and friends. Why not? It’s been my daily driver for over a year now, and the design and build quality are great. Couple that with the ability to add an extra mSATA SSD and upgrade to 8GB of RAM, and what’s not to love?

Well, the wireless adapter, that’s what.

The laptop unfortunately uses a Realtek piece of junk (Realtek RTL8723A Wireless LAN 802.11n USB 2.0 Network Adapter) to provide its wireless capabilities. The adapter actually connects to an internal USB 2.0 port inside the laptop, rather than the more commonly used PCI-Express interface. I’ve recently noticed that the wireless signal has been really poor. Also, since upgrading to Windows 8.1, I’ve been getting blue screens of death every time I wake the laptop from sleep.

Having tried just about every driver available from Microsoft and from Lenovo themselves, I decided to send it back for repair, and Lenovo promised me they would swap the adapter for a different brand. I got my system back after around 10 days and found they had done nothing of the sort; they had simply installed a different driver in Windows. Incredibly annoying, as I’ve done that myself a hundred times over.

Anyway, one thing I did notice was that the driver does not appear to be one that has been publicly available. Also, I’m pretty sure all the drivers I tried before sending my Yoga 13 back to Lenovo were labelled as RTL8723AU. So I’ve uploaded it here for you to download. I’m monitoring the performance of the driver, and so far I have had no blue screens, and my wireless connection is now reporting 150 Mbps instead of the 75 Mbps I was getting with all the previous drivers.

Driver details and download:

Version: 1021.6.118.2013
Date: 18/01/2013
Description: Realtek RTL8723A Wireless LAN 802.11n USB 2.0 Network Adapter


For The Perfectionist: Correcting the Case of Server Names in the ConfigMgr 2012 Console.

ConfigMgr is a great product; if you are here reading this, you probably already know that. I look at the console a lot. Most customer engagements I attend involve some sort of debugging through the ConfigMgr console, and in my lab it is a staple part of my testing.

I was recently asked by a customer why the ConfigMgr console displays server names in the Site Systems window (amongst others) within the Monitoring tab in mixed case, as shown below.

Mixed case in the ConfigMgr console

Now, I don’t know why this is. I can only assume different wizards apply different formatting to the text they take in, so some UPPER() it and some simply store it as entered. I was asked to find a workaround.

After running a brief trace in SQL Server, and with the help of this FindMyData_String Stored Procedure, I was able to track down the relevant text in the SQL database and adjust it accordingly.

I must stress, this has only been tested in my lab and the customer’s lab, and although no issues have been identified yet, it’s worth mentioning that this is an undocumented procedure and likely unsupported. That being said, if you look at the text stored in the DB, it refers to display text, so I’m pretty confident it is only used for presenting display names in the console.

Anyhow, the text you want to change is here:
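As a rough sketch of the adjustment (Summarizer_SiteSystem is simply where the strings surfaced in my lab, so treat the table, column and example server names below as assumptions to be confirmed with FindMyData_String against your own database, and back up the site database before running anything):

    -- Locate every table/column holding the server name text
    -- (FindMyData_String is the search helper linked above; the parameter
    -- usage here is assumed from the default version of that script)
    EXEC dbo.FindMyData_String 'myserver.mydomain.local';

    -- In my lab the mixed-case display text surfaced in Summarizer_SiteSystem.
    -- REPLACE matches case-insensitively under the default collation, so this
    -- rewrites the mixed-case entries in upper case.
    UPDATE dbo.Summarizer_SiteSystem
    SET    SiteSystem = REPLACE(SiteSystem, 'myserver.mydomain.local', 'MYSERVER.MYDOMAIN.LOCAL')
    WHERE  SiteSystem LIKE '%myserver.mydomain.local%';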

Please test this thoroughly before implementing in a live environment!

Slow Operating System Deployment with ConfigMgr 2012 R2 on ESX 5.5.

I just wanted to share this tip that I discovered while testing my ConfigMgr Operating System Deployments in my home lab.

I’m running a test domain in ESX and have a number of virtual machines operating pretty well. Part of this domain is my ConfigMgr 2012 R2 environment. I am in the middle of performing some Windows 8.1 deployment tests and noticed that when I tried to deploy an OS to a test VM, it was extremely slow both to load the initial WinPE boot image from WDS and to deploy the OS image itself.

This was strange considering I am using a pretty high-spec server and working within the confines of a vSwitch, so no other hardware should be involved in routing the data. Plus, I’m running on an SSD, so I was expecting fairly decent speeds. Well, that was not the case: it was taking about 10 minutes to load the boot image (150MB in size) and over an hour to deploy the 3GB image file.

I trawled the net for answers, but nothing worked. The environment is patched to the latest level, including the post-R2 ConfigMgr hotfixes specifically designed to address the slow OSD deployment issue.

However, after swapping out the default network adapter (E1000E) on both the PXE-enabled Distribution Point and the test workstation VM for the VMXNET3 adapter, my speeds improved dramatically, bringing the initial boot image load to under 1 minute and the full OS deployment to about 20 minutes!

Given the ridiculous speed improvement, now begins the task of swapping out all the VM NICs for VMXNET3 adapters.

My Whitebox NAS/SAN for 2014 Onwards.

As some of you may have read, I have recently been refurbishing my home lab setup. If you haven’t, why not check it out below?

In my previous Home Lab, a Dell PowerEdge 2950 III, I made use of local storage to provide datastores to my ESX environment. This was fine and all, but it did not allow me to test the advanced features commonly associated with enterprise ESX environments, such as iSCSI, NFS, shared storage and replication. I also had a file server running 24/7 that was completely dedicated to storing my local media collection and managing downloads. Far from ideal.

I decided that along with building out my new Home Lab ESX server, I would build a new device that would be capable of managing my media collection and download tasks, as well as being a central storage SAN for my ESX environment. This would provide me with a single storage solution that would scale as I wanted to grow my Home Lab.

Before embarking on my build, I listed out my requirements.

  • The device must be capable of handling at least 6 disks without expansion cards.
  • It must be ‘server’ class hardware. I’m not running a mission-critical enterprise system here, but there are some server-class features I have come to love and would prefer to have up front, such as IPMI, remote KVM and ECC memory support.
  • The device must be capable of accepting additional 1GigE network cards and perhaps 10GigE adapters for the future.
  • Rack-mountable. Not for everyone, but all my other kit is racked, so this should be the same.
  • Sufficient options for cooling. As with all my other kit, this device was going to be placed under the stairs with zero air flow.

After much time researching my options, I formulated the following list of parts, which I purchased.

Codegen 4u-500 Case

I have used this case before. It is a solidly built 4U server case which supports the mATX motherboard I was planning on using and has 3 x 5.25″ external bays, into which I wanted to fit a 4-in-3 HDD hot-swap caddy.

Supermicro X9SCM-F mATX Motherboard

This motherboard fills all the requirements I was looking for in a server-class board. It has 6 on-board SATA ports (2 x SATA III and 4 x SATA II), dual Intel 1GigE NICs, a bunch of PCI-Express expansion slots and remote KVM/IPMI. As a side note, I actually used this same board in my Home Lab server and I’m really happy with it.

Intel Pentium G620 CPU

I actually already had this CPU lying around from my last NAS server. I was originally planning on purchasing a lower-end Ivy Bridge Xeon CPU, as I didn’t require huge CPU resources, but to my surprise, despite being undocumented by Intel, this CPU fully supports ECC RAM when paired with the chipset on the X9SCM motherboard. A great budget dual-core CPU taking on server duties! Love it.

16GB Crucial ECC RAM

Pretty much the de facto choice for any server build I produce. I went for 16GB of ECC RAM as I was planning on trying out ZFS filesystems, and from what I’ve read, the more memory the merrier as far as ZFS is concerned.

Crucial M500 960GB SSD

I wanted a fast disk for my primary ESX datastore. I plan on using mechanical drives to provide bulk, slow storage when required.

Antec Earthwatts 390 watt PSU

A relatively low-power, green(ish) power supply with all the required connectors.

IOCrest PCI-Express SATA III (6G) Expansion Card

I ended up picking a cheap expansion card, as I wanted to add some additional disks that pushed me over the 6-connector capacity of the X9SCM. I trawled the internet to find one supported natively by FreeBSD, and this seemed to be the one. It runs a Marvell 9120 chipset which, according to this fantastic and concise article, is fully supported by FreeBSD.

I also had some parts lying around from a previous server which I wanted to re-use.

  • HP NC360 dual port 1GigE network adapter
  • 4 x Seagate ST2000DL003 2TB 5900RPM Hard Disk Drives
  • 2 x WD WD5000AAKS 500GB 7200RPM Hard Disk Drives
  • 1 x 120mm Arctic Cooling F12 CO (Continuous Operation) fan
  • 2 x 80mm Arctic Cooling F8 fans

While waiting on the new parts to arrive I started researching what Operating System I was going to use to tie this system together. I looked at FreeNAS, NAS4Free, OpenFiler and OpenIndiana – all of which support iSCSI in some way, shape or form.

In the end, I settled on FreeNAS, primarily because its support community seemed much more ‘alive’ than the rest, and in my experience of using FreeBSD-based systems, I figure I will need all the help I can get! Now, this is not in any way a scientific decision; for instance, I have done no performance testing on any of the other operating systems. It may in fact pan out that I end up using a different OS in the future, but for now, FreeNAS seems to be a good choice.

Another bonus I discovered with FreeNAS (this is common across most BSD-based OSes, I believe) is that I could offload my download and media services to a ‘Jail’, a virtual machine-like portion of the system that operates in its own context, away from the actual OS of the NAS/SAN. This is quite handy indeed. FreeNAS also supports Plugins, but I had great difficulty getting them to work properly, so I decided to create a single Jail and manually install all of my apps.

So, with all the parts here, I got on with the build. As usual, the photos can be found below. Keep your eyes peeled for an updated article once I have finished the build and configuration and have connected it to my Home Lab.
