Monday, February 28, 2011

Hyper-V Windows 2008 Server Core setup

Why a Server Core installation? Although it does not include the traditional full graphical user interface, it provides several benefits that I appreciate very much.

Because this installation option installs only what is required to run a Hyper-V server, less maintenance is needed:
- Fewer updates and reboots;
- Less disk space;
- Lower risk of bugs;
- Smaller attack surface.

And of course, it is absolutely free: Microsoft offers Hyper-V Server as a free download, with no Windows Server license required. Anyone can download Hyper-V Server 2008 R2 from the Microsoft Downloads website.

After the installation you don't need to enable the Hyper-V role; the components needed to create and manage virtual machines are already installed by default.

First of all, it is very important to have your server's BIOS settings configured properly. Make sure that "Virtualization Technology" and "Execute Disable" are both set to Enabled. On my Dell PowerEdge 1950 I found them under the "CPU Information" settings. If you get the "The Virtual Machine could not be started because the hypervisor is not running" error, you probably skipped the steps above.

The installation steps are straightforward; here you can find detailed instructions. As soon as the installation is finished, you set up the local administrator password and log in.

Pretty spartan: there is no desktop and no Start menu, only the command line. But with the SCONFIG tool you can easily set your system up and get it on the network so that you can manage the server remotely. You navigate through SCONFIG's options by typing the number or letter of the configuration or information option you want.

With SCONFIG you can easily set your Hyper-V server's name, join a domain or a workgroup, configure network settings, run Windows Update, etc. For details see Configuring a Server Core ....
So, assuming that your network is configured and the machine is properly joined to a domain, let's enable remote management. Enabling remote management means that you will be able to manage the server from your workstation using the Microsoft Management Console (MMC). Once you are in SCONFIG, press 4 and select options 1 and 2.
This enables MMC Remote Management and Windows PowerShell. It has to be done first because the "Allow Server Manager Remote Management" option requires PowerShell to be enabled - so you cannot shuffle this ordering around to avoid the reboot. After the reboot, select option 3.

You can also enable remote access to your Hyper-V server's desktop; it's option 7 in SCONFIG's main menu. And of course, don't forget to add the user you are going to manage your Hyper-V server with to the local Administrators group - option 3 in the main menu.

That's it. Now you can head over to your workstation to manage your Hyper-V server remotely. Note that you need a Windows 7 desktop to do it, because the Remote Server Administration Tools used to manage Hyper-V are not available for previous Windows versions. So download them from the Microsoft site and install them. After that you will find a new option in the Windows Features list (Control Panel -> Programs -> Turn Windows features on or off): Remote Server Administration Tools.
Check Hyper-V Tools (under Role Administration Tools) and Server Manager under the top-level list. Now the Server Manager and Hyper-V Manager commands are available in your Administrative Tools.
The Server Manager console is actually a comprehensive tool for managing the whole server: disks, services, users, etc., including the Hyper-V feature. For example, in this article you can read how to remotely manage your Hyper-V server's devices.
But Hyper-V Manager can be run in its own console, which is enough to create and manage virtual machines. You also need to enable WinRM to connect to your Hyper-V server. To do this, run cmd as administrator and type:
winrm quickconfig
Say yes to a couple of prompts. Then type in
winrm set winrm/config/client @{TrustedHosts='RemoteComputerName'}
If you use a firewall, configure it to allow remote management of the server's storage. This command should help:
netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes
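To double-check that the client settings took effect, you can inspect the WinRM client configuration on your workstation (the TrustedHosts value should contain the server name you entered above):
winrm get winrm/config/client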

Now you can connect the manager to the Hyper-V server and create your first virtual machine. You can easily do it with a very straightforward wizard; just run it from the "Actions" panel on the right side: New > Virtual Machine.
I would recommend changing the default Hyper-V folders for storing the hard disk file and the virtual machine configuration file to a folder you know well, because sometimes you will need to access some of these files on the file system. To do this, use Hyper-V Settings. It is better for each virtual machine to have its own subfolder, named after the virtual machine's host name, for example "C:\Hyper-V\First-VM".
Be sure to check the "Store the virtual machine in a different location" option, but don't change the path - it should stay your default Hyper-V folder. Only in this case will all information about the virtual machine end up under the appropriate folder:
- Disk file "C:\Hyper-V\First-VM\First-VM.vhd"
- Virtual machine configuration "C:\Hyper-V\First-VM\Virtual Machines"
- Snapshots "C:\Hyper-V\First-VM\SnapShots"
Otherwise, in my case, this structure was spread across the default C:\Hyper-V folder. It seems there is a bug in this wizard.
If you are not sure how much RAM to specify, it doesn't matter for now; you can easily change this value afterwards.
To configure networking you need to create the virtual network first. Read here for details. Make sure that you use the "Network Adapter" and not the "Legacy Network Adapter". The legacy adapter relies heavily on emulation, which causes a lot of CPU overhead.
This page is about creating storage for your virtual machine, and it is very simple. Unfortunately, the ease of use of the wizard masks some powerful features of virtual hard disks in Hyper-V. Since selecting the right disk type is a key decision, I want to cover some aspects of Hyper-V virtual hard disk creation and configuration here.

There are two types of disk controllers that Hyper-V supports: SCSI and IDE. SCSI is a synthetic storage controller that uses the Virtual Machine Bus (VMBus) and performs disk I/O without any emulation, with reduced CPU overhead and very high performance. It differs from IDE, which is an emulated device. That means there is a little overhead in processing disk operations, so the SCSI controller should provide significantly better performance than IDE. But it does not. Hyper-V actually has a filter driver that reroutes IDE device I/O to the synthetic storage device. As a result, IDE and SCSI devices offer equally fast I/O performance when integration services are installed in the guest operating system. A couple of limitations remain for IDE disks:
  • Disk commands to IDE disks on the same controller are serialized by the guest operating system (note that you can only have two IDE disks on a single controller);
  • The IDE disk is limited to I/O block sizes of 512 KB or less, while the SCSI controller can go up to block sizes of 8 MB;
However, I have yet to see a test where either of these limitations resulted in a noticeable performance difference between IDE and SCSI.

At the same time, SCSI has a serious limitation too. Since SCSI is a synthetic device that uses VMBus, and the Hyper-V BIOS has no knowledge of VMBus, a SCSI disk can't be a boot drive. The operating system disk must be attached to the IDE controller for the operating system to boot correctly, which obliges you to have an IDE disk anyway. However, Microsoft strongly recommends that you attach data drives directly to the synthetic SCSI controller if their expected I/O rate is high.

So how about having two virtual disks for a VM? Actually, I don't think that is a problem. It is generally good practice to separate the OS partition from data/workload partitions, so in that respect two disks are even a good idea.

You also need to know about virtual hard disk types. With Hyper-V, your choices include dynamically expanding, fixed-size, differencing and pass-through disks. There are plenty of descriptions of these disk types on the internet, so for the sake of this tip I want to focus on fixed and dynamic disks. By default the wizard creates a dynamic disk where you can install an operating system. Such a disk saves space on your hard disk, but it is a bad choice if you want the best performance. A fixed VHD performs better than a dynamic one in most scenarios, by roughly 10% to 15%.

So to gain access to the configuration options that let you create a better-performing VM, just skip this step and create the disks independently of the virtual machine, then assign them to the virtual machine after they have been created.
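For example, one way to create a fixed-size VHD right on the Server Core console is diskpart, which on Windows Server 2008 R2 can create VHD files. This is just a sketch; the path follows my First-VM layout above and the size (in MB) is an arbitrary example:
diskpart
create vdisk file="C:\Hyper-V\First-VM\First-VM-Data.vhd" maximum=40960 type=fixed
exit
Afterwards attach the existing .vhd file to the virtual machine in its settings instead of creating a new disk in the wizard. You can achieve the same from Hyper-V Manager with the New > Hard Disk wizard, which exposes the fixed/dynamic choice explicitly.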

Friday, February 25, 2011

WinRM - This user is allowed a maximum number of concurrent shells

I have found that WinRM limits the number of concurrent shells while developing some PowerShell tools that connect to a bunch of servers in parallel. You may get an error like:
Connecting to remote server failed with the following error message : The WS-Management service cannot process the request. This user is allowed a maximum number of 5 concurrent shells, which has been exceeded. Close existing shells or raise the quota for this user.

To raise the limit, run the following command:

winrm set winrm/config/winrs '@{MaxShellsPerUser="100"}'


The output should now look like this:

AllowRemoteShellAccess = true
IdleTimeout = 180000
MaxConcurrentUsers = 25
MaxShellRunTime = 2147483647
MaxProcessesPerShell = 15
MaxMemoryPerShellMB = 150
MaxShellsPerUser = 100
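
If you prefer PowerShell, the same quota can be changed through the WSMan: drive from an elevated session (100 here is just the value I used above):
Set-Item WSMan:\localhost\Shell\MaxShellsPerUser 100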

Friday, February 4, 2011

Show Preview In New Browser Tab Gracefully

I appreciate the preview button in the "Edit HTML" section of this blog-publishing service: the first time it opens the preview of your blog in a new browser tab (window), and afterwards it just refreshes it in the same tab. Very handy.
I was curious how to implement such behavior in my application, and I found that it is very easy.

Usually the _blank value is used for the target attribute. It opens a new tab each time because the value is actually the name of the new window. So rather than using this default value, give it a particular name like preview-tab. Then all clicks will open the content in the same window.
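A minimal sketch of the idea (the URL and the preview-tab name are arbitrary examples):
<a href="/preview" target="preview-tab">Show preview</a>
or, if you open the preview from JavaScript:
window.open('/preview', 'preview-tab');
Because both use the same window name, the first click opens a new tab and every following click just reuses and reloads it.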

Testing your Web Application with QUnit and JQuery 1.5

QUnit is a great framework for testing your JavaScript, and with jQuery 1.5 it becomes even more powerful. In this blog post I'm going to introduce a technique that allows me to test quite complex business logic in my web application in a clear, concise and readable way.

So, for example, there is a page with a directory index and a 'Create Folder' button. Clicking it shows a dialog where you have to type a folder name and press the 'create' button. That sends a query to the server, which creates the new folder on the file system, refreshes the directory index and sends it back to the client. I agree that it is not a very complex scenario. But how can you test, in a single run, that the JavaScript in your browser creates and sends the right query to the server, that the server creates the folder and refreshes the directory index, and that the browser then shows this refreshed directory index to you?

I'm sure that devotees of unit testing will say that this is the wrong approach and that I should split my test into three parts and test them as modules in isolation. But no, that's not my way. I want to write an acceptance test and be sure that this integral behavior works.

So, of course, you can use Selenium along with Cucumber or FitNesse or any other acceptance testing framework. And earlier I would have done it that way. But now I know a better approach.

Here is a loader that will load and execute my test. It has the default HTML layout to be used along with QUnit. It also includes auxiliary scripts from testLoader.js and, of course, createFolderTest.js with my test.
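The original snippet is not reproduced here, so below is a rough sketch of what such a loader page might look like; only testLoader.js and createFolderTest.js come from the description above, while the other file names and the testFrame id are assumptions:
<!DOCTYPE html>
<html>
<head>
  <title>Create Folder acceptance test</title>
  <link rel="stylesheet" href="qunit.css" />
  <script src="jquery-1.5.js"></script>
  <script src="qunit.js"></script>
  <script src="testLoader.js"></script>
  <script src="createFolderTest.js"></script>
</head>
<body>
  <!-- standard QUnit layout -->
  <h1 id="qunit-header">Create Folder acceptance test</h1>
  <h2 id="qunit-banner"></h2>
  <div id="qunit-testrunner-toolbar"></div>
  <h2 id="qunit-userAgent"></h2>
  <ol id="qunit-tests"></ol>
  <!-- the page under test is loaded into this frame -->
  <iframe id="testFrame" src="about:blank" style="width: 100%; height: 400px;"></iframe>
</body>
</html>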

A key element here is the iframe. I use it to load my test page there. It helps me to separate and encapsulate my test code from the production code, which lets me manipulate the page under test without reloading my test. BTW, a big thanks to Mike Plavskiy, my boss and colleague; it was his idea to use an iframe instead of Selenium.

Next come a couple of the test loader utility methods and the test itself.


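Since the original code is not included here either, here is a minimal sketch of what the frame helper and the test could look like; frame.go, the _$ callback argument and the new-folder / create-folder-btn ids come from the description below, while the URLs, the other selectors, the dialog title and the folder name are made up for illustration:

// testLoader.js (sketch): loads a page into the test iframe and returns a
// jQuery 1.5 Deferred that is resolved when the page has finished loading.
var frame = {
    go: function (url) {
        var dfd = $.Deferred();
        $('#testFrame').one('load', function () {
            // assumes the page under test loads its own jQuery,
            // which is handed to the test as _$
            dfd.resolve(this.contentWindow.jQuery);
        }).attr('src', url);
        return dfd.promise();
    }
};

// createFolderTest.js (sketch): the acceptance test described below.
asyncTest('create folder dialog sends the request', function () {
    $.when(frame.go('/files/')).then(function (_$) {
        _$('#new-folder').click();
        equal(_$('.dialog-title').text(), 'Create Folder', 'dialog is shown');
        _$('#folder-name').val('New Folder');
        _$('#create-folder-btn').click();
        start();
    });
});

asyncTest('directory index is refreshed with the new folder', function () {
    // reload the directory index and check the result of the operation
    $.when(frame.go('/files/')).then(function (_$) {
        ok(_$('a:contains("New Folder")').length > 0, 'new folder is listed');
        start();
    });
});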
The asyncTest call is provided by QUnit and is dedicated to testing asynchronous code. It means that execution of the QUnit code in charge of verifying test results is stopped until the start function is called. You can read more about it in this article. In the meantime, the code inside the asyncTest block runs. The first line here uses the new Deferred Objects feature of jQuery 1.5. The $.when( frame.go(url) ).then(function(_$)... construction means that as soon as the frame.go method, which loads the test page into the frame, completes, the function in the then block - a registered callback - will be fired.

Here I test clicking the new-folder button, the appearance of the dialog and its title, typing the name of the new folder and clicking the create-folder-btn button. The start call executes QUnit's verifier, which evaluates the equal statements.

In the meantime, the query that was fired by the create-folder-btn button is in progress. To verify its result, I start the next asyncTest call. Using the same when-then mechanism, I wait until the frame is reloaded with the result of the operation and then check it. So, here you are.
My point is that more complex behaviors and scenarios will be just as clear and readable. I will show this in my upcoming posts.