Recognized as an MVP!

I’m happy to announce that I’ve been selected for and awarded the MVP award by Microsoft! My main subjects have been server virtualization and server clustering. Most of my activities and blog posts relate to Hyper-V, though I have a strong attraction to clustering due to the close relationship between the two technologies most of the time.

I’m focusing more on integrating third-party solutions with Hyper-V and providing comprehensive solutions to customers (e.g. complete DR solutions using Hyper-V with Double-Take).

Hats off to everyone who has been supportive of me and encouraged my activities.



Battery life decreases after the iOS 4 upgrade on the Apple iPod touch

I was counting the days until Apple released the free iOS 4 update for the iPod devices. If I’m correct they released the update on June 22nd, and I managed to upgrade my iPod touch at 3 a.m.!

Everything was normal initially, until I noticed how quickly the battery drained after just 1–2 hours of using the device. Under the previous OS I never experienced this, apart from the occasional session of the Sniper 3D game; of course, anyone can understand that graphics-intensive games like that can drain the juice very quickly. After some forum searching I found out I’m not the only moron screaming and blaming Apple for this issue.
Finally I decided to do a hard reset of the device. Most of the forums clearly say to back up your data and settings before proceeding. But as usual, I didn’t do that :) It turned out I didn’t lose any data, and the device is now functioning smoothly with longer battery life.

To do a hard reset, hold down the home button and the sleep/wake button for nearly 10 seconds until the Apple logo appears. As a rule of thumb, back up your data!! You may not be as lucky as I was :P

Remote Desktop virtualization host capacity planning


After the main buzz of server virtualization, the next key thing in the industry is desktop virtualization, also known as VDI. Some enterprise companies look at this solution as a way to extend their hardware life cycle; for others it is a method to tightly control the access point to their central data. Nevertheless, this is an important technology for service partners as well.

To give a better idea of how to size this solution for customers using Hyper-V and the Windows Server 2008 R2 connection broker, Microsoft has released a new capacity planning document named “Remote Desktop Virtualization Host Capacity Planning in Windows Server 2008 R2”. You can get a copy of this document from here.


Hyper-V R2 Component Architecture Poster

If you’re interested in more detail on how Hyper-V works under the hood, then you’ll find the following poster is for you. Apart from techies, even pre-sales people will find it useful for explaining to their customers how exactly Hyper-V functions in various scenarios.

This poster is clearly laid out in eight sections, namely:

Hyper-V poster

  • Architecture
  • Virtual Networking
  • Virtual Machine Snapshots
  • Live Migration
  • Storage Interfaces
  • Storage Types
  • Storage Location and Paths
  • Import and Export

    You can download it from here.

    4/05/2010

    Apple iPad hits the market

    Hi everyone, Apple has officially released the iPad. According to web reviews, people have been waiting in queues to get their hands on this neat device.

    The Apple iPad is a next-generation device with a strong focus on sensor technology. Since it is built with Wi-Fi and 3G connectivity, its information access capability is limitless.

    Since I haven’t gotten my hands on the device, it is difficult to give my honest review, but as a tech gadget freak I would say I’d love to get my hands on it. Well, while I’m typing this, some guys have already gone one step ahead and disassembled the device to see what is inside, as any techie would do (cough… cough…).

    Anyway, I think it’s better to have a look at what is inside this cool gadget. You can find a step-by-step guide on how to disassemble this device over here.


    Installing SQL 2008 on a domain controller

    This is something I have seen in the tech forums and have also tried myself, so it seems ideal to share this knowledge so others can benefit.

    One fundamental rule in the computer world is the balance between security and productivity. Running SQL Server on a domain controller exposes AD to too many issues. The following information has been extracted from various online resources.

    For security reasons, Microsoft recommends that you do not install SQL Server 2008 on a domain controller. SQL Server Setup will not block installation on a computer that is a domain controller, but the following limitations apply:
    -On Windows Server 2003, SQL Server services can run under a domain account or a local system account.
    -You cannot run SQL Server services on a domain controller under a local service account or a network service account. (This is the key issue I had to face)
    -After SQL Server is installed on a computer, you cannot change the computer from a domain member to a domain controller. You must uninstall SQL Server before you change the host computer to a domain controller.
    -After SQL Server is installed on a computer, you cannot change the computer from a domain controller to a domain member. You must uninstall SQL Server before you change the host computer to a domain member.
    -SQL Server failover cluster instances are not supported where cluster nodes are domain controllers.
    -SQL Server is not supported on a read-only domain controller.
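The limitations above can be sketched as a quick pre-install sanity check. This is a hypothetical helper of my own, not part of any Microsoft tool; the account strings are illustrative:

```python
# Hypothetical pre-install check for the SQL-on-DC limitations listed above.
# The well-known service account names are spelled the way Windows reports them.
UNSUPPORTED_ON_DC = {"NT AUTHORITY\\LOCAL SERVICE", "NT AUTHORITY\\NETWORK SERVICE"}

def sql_on_dc_warnings(service_account, host_is_dc, is_rodc=False):
    """Return a list of warnings for a planned SQL Server install, per the
    documented limitations above."""
    warnings = []
    if host_is_dc and service_account.upper() in UNSUPPORTED_ON_DC:
        warnings.append("SQL Server services on a DC cannot run under the "
                        "local service or network service account")
    if is_rodc:
        warnings.append("SQL Server is not supported on a read-only domain controller")
    return warnings
```

For example, a plan to use the network service account on a DC would be flagged, while a domain account would pass cleanly.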

    I hope this information will be helpful when you’ve been asked to carry out a SQL Server setup on a DC :)


    Proud owner of an Apple iPod touch (3rd generation)

    Got my hands on the long-awaited iPod touch yesterday. This is the 32 GB version. Even though there is an 8 GB 3rd-generation iPod touch, its design is entirely that of the 2nd generation. Being technically minded, the first thing I did was upgrade the firmware :) It’s tough to say what kind of improvement that made, since I’ve barely tested it for a few hours. So far I’m loving all the features, especially the Wi-Fi access and mail access. Apart from that I really like the idea of voice memos. Even though this is introduced as an MP3 player, I think it has the essential tools for day-to-day work.

    One disappointment is the missing camera. I really wish Apple had considered including one in this device, since it is an essential part. Let’s put it this way: so far they have given all the utilities you can imagine, and even apps to edit pictures, but no camera! But again, before moving to this device I did some web research and found lots of positive feedback on it.

    I didn’t go for the Microsoft Zune due to the limited apps available on the internet when I did my research. But I guess MS will find a way to overcome that very soon. With that said, here goes another member of the Apple iPod community :)


    SQL 2008 high availability through Windows 2008 clustering

    With the introduction of the Windows 2008 cluster service, Microsoft has made clustering as easy as it can be for system administrators. I’ve been blogging about Windows 2008 clustering and the various third-party software you can use to accommodate clustering. Once Microsoft released Storage Server to TechNet subscribers, I began using that software frequently. Apart from that, I found that StarWind software provides good solutions as well.

    Last month I got the opportunity to demonstrate Windows 2008 clustering and its new capabilities to the IT Pro community with a live demonstration. Too bad I forgot to take some pictures at that event :( You can download the presentation I created for that session from here.

    This month I received an invitation from the SQL community group to conduct a session about SQL cluster creation. Special thanks go to the SQL community MVPs for giving me the opportunity.

    In that session I presented MSCS (Microsoft Cluster Service) and the new features in Windows 2008 R2 clustering. Apart from that, we went ahead with a live demo of SQL cluster creation. The entire demo took nearly 1.5 hours, since I ran everything from a single machine!

    Nevertheless it was a very exciting session, since I got to interact with the SQL community members. The participants had various questions about cluster creation, since most of them are database developers and some of them have already worked on SQL Server 2005 clusters.

    One thing I briefly talked about is migrating from a SQL 2005 cluster to SQL 2008. Forum members can download the presentation from www.sqlserveruniverse.com.

    Apart from that, we discussed the various technical setups and configurations you need to pay attention to during cluster creation, and basic troubleshooting steps.

    SQL MVP Dinesh Asanka presenting the SQL session,

    Forum members who won prizes,


    End of Microsoft EBS (Essential Business Server)

    Sadly but truly, Microsoft has pulled the plug on the EBS server. MS took this decision since there has not been much sales improvement in the server market for this product. Apart from that, most of the SME market is focusing on cloud computing and server virtualization options. EBS is targeted at customers with up to 300 seats (computers), but as MS has identified, a customer with 300 computers will already be an established company from an infrastructure point of view.

    Even though EBS provides significant cost benefits, moving from an existing server environment to EBS is a daunting task. Requiring a minimum of 3 servers (and they need to be high-end as well) to run the product suite, including an additional server for the premium version, is not an easy financial consideration. So the product will be discontinued from sale from June onwards, but for existing customers the product will be supported by MS through the normal product life cycle.

    What I like most is that SBS has not been changed in any way, and most customers still benefit from it. Personally I prefer the SBS product, since it is a significant cost saver for the small business market. That product is targeted at companies with fewer than 75 seats.

    SBS Rocks!


    Backup and Recovery methods for HYPER-V

    Many of our customers have practical issues with how to back up and recover Hyper-V machines. One key thing is that the Hyper-V backup procedure, or at least I would say approach, is different from normal server backup methods.

    If you’re expecting to back up the VMs using Windows native backup software, it is still possible, but the issue is that you won’t be able to recover a single VM in a recovery scenario, only the entire volume. As you can see this is time-consuming, but free options always come with a catch. At least it is a good option for backing up your VMs without shutting them down.

    Please refer to this TechNet article, which describes the solutions in a very detailed and easy-to-understand way.

    If you prefer to back up at the individual VM level, with the data inside each one handled separately, then you’ll have to consider backup software that is aware of Hyper-V. Microsoft DPM is one such product which is very comprehensive in that area. Some resources about Hyper-V backup using DPM can be found over here and here.

    Apart from that, Symantec Backup Exec 12.5 is good software capable of backing up Hyper-V VMs and recovering at the individual file level. (hint… hint, guys)

    One practical piece of advice: backup and restore of VMs is a time-consuming task, so make sure your backup solution has a speedy method of recovery. The DPM team looked at this some time back and advises customers to move to disk-based backup. Still, you have to make sure the backup media’s connectivity with the host server supports speedy recovery.

    So one way to minimize these kinds of unhappy moments is to have high-availability solutions like Hyper-V host clustering. This will allow you to balance the Hyper-V workload. Plenty more information about this can be found when you do a search; one comprehensive article can be found over here.

    We’re looking forward to providing more information about Hyper-V backup with the DPM 2010 product in future articles, so stay tuned :)


    Virtualizing Active Directory service

    Most of the time we recommend that customers and partners not virtualize the AD server. The explanation we give is that there may be problems due to the time sync issue. So what is this time sync issue, and why should we give it so much consideration? In this article I’m going to talk about it a little and explain a solution. As a rule of thumb, I have to tell you this is just my two cents :)

    Normally Active Directory depends heavily on accurate time for various services (e.g. authentication, replication, record updates, etc.). When AD is on a physical machine, it uses the timer interrupts driven by the CPU clock. Since it has direct access to this hardware, the time can be kept accurate.

    When you virtualize it, the main problem you face is the behavior of the virtualized environment. Virtual machines are designed to conserve CPU cycles, and when a guest OS is idle, the CPU cycles sent to that VM are reduced. Since AD depends heavily on these timer interrupts, missing them randomly means the time won’t be accurate. This problematic behavior is the same whether you’re using VMware, Hyper-V, or any other third-party virtualization technology. Once the clients and server have a time mismatch of more than 5 minutes, authentication and network resource access become difficult. (A Windows AD environment uses Kerberos authentication, and by default the allowed time difference is 5 minutes.)
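The 5-minute Kerberos tolerance can be illustrated with a tiny sketch. Note that the 5-minute default is configurable via policy; the function name and structure here are mine, for illustration only:

```python
from datetime import datetime, timedelta

# Default Kerberos clock-skew tolerance in a Windows AD environment.
MAX_SKEW = timedelta(minutes=5)

def kerberos_would_reject(client_time, dc_time, max_skew=MAX_SKEW):
    """True when the client/DC clock difference exceeds the allowed skew,
    which is when authentication starts failing as described above."""
    return abs(client_time - dc_time) > max_skew
```

So a VM whose clock has drifted six minutes away from its DC would start failing authentication, while four minutes of drift would still pass.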

    So one method is to allow the AD server holding the PDC Emulator role to sync time with an external time source instead of depending on the CPU clock. To do that you have to edit the registry on the server holding the PDC Emulator role. (As usual, I assume you guys will take the necessary precautions, like backing up the server, the registry, etc.)

    1. Modify the registry settings on the PDC Emulator for the forest root domain.
    In the key HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters:
    • Change the Type REG_SZ value from NT5DS to NTP.
    This determines from which peers W32Time will accept synchronization. When the value is changed from NT5DS to NTP, the PDC Emulator synchronizes from the list of reliable time servers specified in the NtpServer registry value.
    • Change the NtpServer value from time.windows.com,0x1 to an external stratum 1 time source, for example tock.usno.navy.mil,0x1. More time server information can be found over here.
    This entry specifies a space-delimited list of stratum 1 time servers from which the local computer can obtain reliable time stamps. The list can use either fully qualified domain names or IP addresses. (If DNS names are used, you must append ,0x1 to the end of each DNS name.)
    In the key HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config:
    • Change the AnnounceFlags REG_DWORD value from 10 to 5. This entry controls whether the local computer is marked as a reliable time server (which is only possible if the Type value is set to NTP as described above).
    2. Stop and restart the time service:
    net stop w32time
    net start w32time
    3. Manually force an update:
    w32tm /resync /rediscover
    (Microsoft KB article 816042 provides detailed instructions for this process.) Apart from that, you can refer to this link as well.
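As a quick illustration of the NtpServer format described above, here is a small sketch that builds the space-delimited value, appending the ,0x1 flag to DNS names. The letter-based test for "is this a DNS name" is a deliberate simplification:

```python
def build_ntpserver_value(sources):
    """Build the NtpServer registry value: a space-delimited list of peers,
    with the ,0x1 flag appended to DNS names (IP addresses are left as-is)."""
    entries = []
    for s in sources:
        # Simplistic heuristic: anything containing a letter is a DNS name.
        if any(c.isalpha() for c in s) and not s.endswith(",0x1"):
            s += ",0x1"
        entries.append(s)
    return " ".join(entries)
```

For example, passing ["tock.usno.navy.mil"] yields "tock.usno.navy.mil,0x1", ready to paste into the registry value.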

    As a rule of thumb, test this before applying it to the production network. This is recommended if your organization is preparing to move to a 100% virtualized environment. If not, at all costs try to keep one DC on a physical server :)


    Windows server 2008 R2 failover clustering

    For the people who attended the hands-on workshop on the above topic at Tech.Ed 2010, I hope you found my demonstration valuable and got something out of it. During that session I demonstrated how simplified the process of creating a basic cluster scenario in Windows 2008 is. The entire lab was carried out on one single laptop, and I know patience was a virtue at the time :)

    For the demo I used the StarWind product as the software-based iSCSI solution, and it worked like a charm. I have been using this product for demos most of the time and am really amazed by its simplified GUI console. But don’t think this is simplistic software; underneath you’ll find some advanced features built into it. I’ve blogged about this product several times, since I see growth in software-based SAN solutions in the market.

    So if anyone is interested in demonstrating the clustering features in Windows 2008, you can download that slide deck from here. I have to admit I’ve used various resources and slides from other people as well, so I thank them all for that.

    As I always mention, do contact me if you need more support to build an affordable SAN solution.


    AD DS: Database Mounting Tool (Snapshot Viewer or Snapshot Browser)

    With Windows 2008, Microsoft introduced a new tool called the Active Directory database mounting tool (Dsamain.exe). This was referred to as the Snapshot Viewer and the Active Directory data mining tool during the early releases of Windows 2008. The cool thing about this tool is that you can take snapshots of your AD database and view them offline.

    According to Microsoft’s explanation, this is really helpful for forest recovery and AD auditing purposes. In the case of AD object deletion, you can load a snapshot and compare your current AD against it.

    Before the Windows Server 2008 operating system, when objects or organizational units (OUs) were accidentally deleted, the only way to determine exactly which objects were deleted was to restore data from backups. The pain behind this is:

    • Active Directory had to be restarted in Directory Services Restore Mode to perform an authoritative restore.
    • An administrator could not compare data in backups that were taken at different points in time (unless the backups were restored to various domain controllers, a process which is not feasible).

    But one thing to note is that this is not a method to recover deleted objects; it is merely a way to show you what has happened, by doing a comparison. Apart from that, you’ll need to be a member of the Enterprise Admins or Domain Admins group, or have been granted the particular rights on a user account.

    Now, getting back to the action: to take snapshots, mount them, and view them, you need to know about 3 tools,

    1. NTDSUTIL – Create, delete, mount, list the snapshot.

    2. Dsamain.exe – This will allow us to expose the snapshot as an LDAP server.

    3. LDP or Active Directory Users and Computers MMC to view the mounted snapshot.

    So the steps are going to be as follows,

    1.    Manually or automatically create a snapshot of your AD DS or AD LDS database.
    2.    Mount the snapshot.
    3.    Expose the snapshot as an LDAP server.
    4.    Connect to the snapshot.
    5.    View data in the snapshot.


    Manually creating the snapshot of the AD DS

    1. Logon to a Windows Server 2008 domain controller.
    2. Click Start, and then click Command Prompt.
    3. In the Command Prompt window, type ntdsutil, and then hit Enter.
    4. At the ntdsutil prompt, type snapshot, and then hit Enter.
    5. At the snapshot prompt, type activate instance NTDS, and then hit Enter.
    6. At the snapshot prompt, type create, and then hit Enter.
    7. Note down the GUID returned by the command.
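The interactive steps above can also be scripted, since ntdsutil accepts its subcommands as command-line arguments. Below is a rough sketch of that command layout, plus a small helper to pull the GUID (step 7) out of the tool's output; treat it as a starting point rather than a verified one-liner, and note the sample output in the test is made up:

```python
import re

# ntdsutil can take its interactive subcommands as arguments, so the
# create-snapshot sequence from the steps above becomes one invocation.
CREATE_SNAPSHOT_CMD = [
    "ntdsutil", "snapshot", "activate instance NTDS", "create", "quit", "quit",
]

# The create step prints the new snapshot's GUID in braces.
_GUID_RE = re.compile(r"\{([0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12})\}")

def parse_snapshot_guid(output):
    """Extract the first GUID found in ntdsutil output, or None."""
    m = _GUID_RE.search(output)
    return m.group(1) if m else None
```

Capturing the GUID this way makes it easy to feed into the mount step from a scheduled task, which ties in with automating snapshot creation later on.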


    Mount the snapshot

    1. If you didn’t close the previous window, go back to it, type list all, and press Enter.
    2. Once you get the list of snapshots, you can select one to mount. In this scenario, type mount 2 and press Enter.
    3. If the mounting was successful, you will see Snapshot {GUID} mounted as PATH, where {GUID} is the GUID that corresponds to the snapshot, and PATH is the path where the snapshot was mounted.
    4. Note down the path.


    Expose the snapshot as an LDAP server

    OK, so far we’ve managed to create a snapshot and mount it. Now we need to expose the snapshot so we can view it from the LDP utility or by using the ADUC MMC. In this scenario we’re going to use the second utility (Active Directory Users and Computers).

    1. Open a new Command Prompt.

    2. In the Command Prompt window, type dsamain /dbpath C:\$SNAP_201001281107_VOLUMEC$\WINDOWS\NTDS\ntds.dit /ldapport 51389 (instead of using the default port 389, we’re using an alternative port for the snapshot to minimize any conflicts with the live AD DS).
    Note: “C:\$SNAP_201001281107_VOLUMEC$” is the path we got a few steps back; it represents the snapshot’s mount path on our system.

    3. "Microsoft Active Directory Domain Services startup complete" will appear in the Command Prompt window after running the above command. This means the snapshot is exposed as an LDAP server, and you can proceed to access data on it. NOTE: Do not close the Command Prompt window, or the snapshot will no longer be exposed as an LDAP server.
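The dsamain invocation in step 2 can be assembled programmatically, which is handy once you automate snapshots. This is a sketch of my own; the port check only covers the well-known AD LDAP ports, and the path is the example from this walkthrough:

```python
# Build the dsamain command line from the mounted snapshot path, as in the
# step above. LDAP, LDAPS, and global catalog ports used by the live AD DS:
AD_DS_PORTS = {389, 636, 3268, 3269}

def build_dsamain_command(mounted_path, ldap_port=51389):
    """Return the dsamain command string for exposing a mounted snapshot."""
    if ldap_port in AD_DS_PORTS:
        raise ValueError("port %d conflicts with the live AD DS" % ldap_port)
    return ("dsamain /dbpath %s\\WINDOWS\\NTDS\\ntds.dit /ldapport %d"
            % (mounted_path, ldap_port))
```

Rejecting the reserved ports up front avoids the conflict with the live directory mentioned above.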


    Connect to the snapshot

    We can use any utility that can read LDAP data. In this demonstration, as I mentioned earlier, I’ll go ahead and use the Active Directory Users and Computers snap-in.

    1. Open ADUC.
    2. Right-click ADUC and select the “Change Domain Controller” option.
    3. Type the domain controller name with the custom port number, e.g. “CONTOSO-DC:51389”.
    4. Now you’re looking at the data in the snapshot. Go ahead and open another ADUC window; that will open the current AD DS.
    5. Go ahead and make a change in the live AD DS, then check the two MMCs again. You’ll see the snapshot data does not change.


    So as I mentioned, this is a really cool feature and it saves a lot of time. If you don’t like creating snapshots manually, you can create a scheduled task to take snapshots automatically. One concern is that these snapshots are not encrypted, so if one gets into the wrong hands that is bad for you. So keep them in a safe location, and try to encrypt them for added security.


    Giving attention to good old redirusr and redircmp commands

    I’ve been meddling with some GPO issues and came across these 2 commands. These commands have been around since Windows Server 2003. What brought my attention to them is how you can use them to comply with security auditing. More information about how to use these commands can be found over here.

    Well, first we’ll take an example from an enterprise company. Most of the time the AD admin will get a mail or a request from HR or from a relevant department requesting a new user account. Once you get that request you’ll create the user account, and by default it will go into the Users container in ADUC. Due to your busy schedule you’ll forget to move the account to the correct OU. Even though this may only be a matter of a few hours’ or a few days’ delay in moving the account to the relevant OU, security-wise it is a big risk!

    One way I can think of to eliminate or minimize this: whenever a new user account is created or a new computer is added to the domain, they are placed in a different OU which has unique GPOs assigned to it. In that particular GPO you can edit the security settings to comply with the company IT security policy and grant minimal user rights until the account is moved to the correct OU :)
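The idea above boils down to two one-time commands per domain. Here is a sketch that builds them from the distinguished names of your staging OUs; the OU names in the example are made up for illustration:

```python
def build_redirect_commands(users_ou_dn, computers_ou_dn):
    """Build the redirusr/redircmp command lines that point the default
    locations for new user and computer objects at locked-down staging OUs."""
    return [
        'redirusr "%s"' % users_ou_dn,
        'redircmp "%s"' % computers_ou_dn,
    ]
```

For example, passing "OU=Staging Users,DC=contoso,DC=com" and "OU=Staging Computers,DC=contoso,DC=com" yields the two commands to run once on a DC.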

    In a nutshell this may seem like a simple thing, but in IT security terms it is a big step. So go ahead, roll up your sleeves, give it a try on your company network, and be safe!


    Supporting Exchange 2007 on Windows Server 2008 R2

    Well, more good news for customers and partners. Microsoft has demonstrated and proved they are indeed listening to customer and partner feedback. The Exchange 2007 product team has taken the decision to support the Windows 2008 R2 platform. Most customers running Exchange 2007 will not have quick plans to move to Exchange 2010, but they will still prefer to have the latest operating system version for improved manageability.

    More information about this decision has been blogged over here.


    Bring your Own Computer (BYOC) to work

    Well, this has at times been a debatable question, or rather I would say an adoptable method carried out by some companies. Microsoft, Intel & Citrix are some companies who have adopted this, and they have already carried it out in several regional offices. The recent economic situation has given most companies the green light for this. In a way I see this as a good thing, and I started adopting it almost before the big companies decided on it. Actually, in 2007 :)

    We technical people cannot be locked into the usual 8-to-5 office hours; sometimes we work from home and until late at night. Apart from that, companies prefer to get maximum benefit out of their employees, while HR keeps trying to make life comfortable for the workforce. (Whether they succeed or not is a different question.) My point is everyone wants to be happy and still not compromise the rules, right? Well, in that case BYOC is a good method for several reasons,

    1. Employees will have their personal laptops and can work from anywhere, which I call freedom and flexibility.

    2. Employers cannot afford to give employees all the latest hardware all the time and replace it annually. But they can lend some money to employees to buy their own machines under certain legal conditions, and this will be a fraction of the cost of the annual IT budget.

    3. Employees have the flexibility to work and at the same time take breaks and use them more meaningfully to interact with friends and colleagues via MSN, other IMs, and social networks (e.g. Facebook). I know some companies will see FB as a bad thing, but again the fundamental rule works out here: trust between employee and employer. I also agree with not wasting office hours on FB farming or playing games. Keep that for OOOH (Out Of Office Hours).

    And so on; you can figure out various other benefits which are good for both parties. With every new concept come some raised concerns, and the same goes here.

    1. Security – Well, this is something for the IT department to come up with. Do you really think BYOC is the only major issue? Think about the other ways your network can be compromised. What we should really care about is how to make sure the company’s main servers and confidential data are secured properly. I have seen it many times come to the boiling point of servers not being secured with the recommended security patches and security policies. Now it’s time to go and have a second, deeper look at the security aspects.

    2. Cost – As I mentioned, this will be a lot less if you plan it carefully, since you’re not going to spend as much money but instead lend some money to the employee to buy his/her own machine under relevant terms and conditions. But please remember this option is not applicable to all companies, and it has to be evaluated even at the department level.

    3. Security policy – Well, companies can still keep those hefty security policy guideline books :) My point is you can still apply some general rules and terms, evaluate your security policies, and try to balance everything. If you’re very concerned about the desktop environment, then this is the time you can even evaluate VDI (Virtual Desktop Infrastructure); Microsoft and Citrix are offering pretty cool solutions for this. I think as we move forward into 2010, VDI will be a good option for companies to consider.

    So in a nutshell those are my opinions about BYOC, and I agree with the trend. The question is: do you? Share your thoughts, and let’s see if we can make the working environment more friendly, flexible, and sexy!!!! I mean with cool laptop models, people :)


    Windows 7 deployment using image capturing

    With the introduction of Windows Vista, Microsoft introduced the image capturing method. Earlier we used to rely on Symantec Ghost, Acronis, etc. Now Microsoft has given us a completely free tool set for image capturing and deployment. The advantages I see in images captured using the Microsoft tools are:

    • One image for many hardware configurations
    • Multiple images in one file
    • Offline servicing of the image file
    • Installation on partitions of any size
    • Bootable image support for Windows PE
    • Modification of image files using APIs

    Of course, if you search further you’ll find many more options and advantages. In this article I’ll guide you through capturing a Windows 7 installed PC using the ImageX command and then deploying it to a different PC. Of course this can be customized into a zero-touch deployment with advanced tools like SCCM, but that will be another article :)

    OK, to start, first you’ll need the following items,

    • Active Directory environment (DC with DHCP, DNS roles enabled)
    • Windows 7 PC installed with Windows 7 AIK (Automated Installation Kit)
    • Windows 7 PC with all the necessary software preinstalled, to be captured as the reference image.
    • Another PC ready without any OS. The network card needs to support PXE.

    In my article, the above-mentioned lab was carried out in a Hyper-V environment. All of the machines are virtual. The power of virtualization really shines here :)

    Now I assume you’ve already set up the domain controller with functioning DNS and DHCP, and also one Windows 7 PC with the downloaded Windows AIK installed (since that part is easy).

    Now back to work. First I took a virtual PC with Windows 7 and MS Office 2007 preinstalled. In your case you can install all the applications you normally use in your production environment.


    Once all the applications have been installed, go ahead and remove the static IP settings and configure the machine to get an IP from the DHCP server. Since we plan to do image capturing, we don’t want the same IP duplicated to all the PCs, right?


    After that, go ahead and launch the sysprep command. This command will make sure all the unique data and settings are removed from the reference PC.


    Once the PC has been generalized, go ahead and start it from the Windows PE CD. How to create a Windows PE CD can be found over here. Since I’m doing everything in a virtual environment, the pictures show how to assign the ISO image and also how to configure a legacy network adapter for that VM. In Hyper-V, only the legacy network adapter supports getting an IP from DHCP when booting.


    Now, once booted from the PE CD, we’ll go ahead and map a network drive to export the captured image to. After that, run the ImageX command to capture the image.


    Once the image capture is completed (how long it takes will depend on the amount of data on the reference PC), take the same Windows PE CD and boot the machine which has no operating system. Once you boot to the command prompt, again map the network drive using the net use command, and then apply the captured image using the ImageX command.
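For reference, the two ImageX invocations used in the capture and apply steps above look roughly like the strings built below. The drive letters, share path, and image name are examples I made up for this lab, so adjust them to your environment:

```python
# Sketch of the ImageX capture and apply command lines used in this walkthrough.
def imagex_capture(source_drive, wim_path, image_name):
    """Capture a sysprepped volume into a WIM file."""
    return 'imagex /capture %s %s "%s"' % (source_drive, wim_path, image_name)

def imagex_apply(wim_path, image_index, target_drive):
    """Apply an image from a WIM file onto a bare volume."""
    return "imagex /apply %s %d %s" % (wim_path, image_index, target_drive)
```

So on the reference PC you would run something like imagex /capture C: Z:\win7.wim "Win7 Reference", and on the bare machine imagex /apply Z:\win7.wim 1 C:.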


    Once that completes, restart the PC; it will start with the OOBE (Out of Box Experience), where you can provide a computer name, user name, etc. As you can see, the entire process is not that difficult, especially compared with the benefits you get from image-based deployments. Microsoft MDT 2010 is a good tool for automating this process if you need to deploy Windows XP, Windows 7 or Windows Server 2008 to a few hundred computers. Apart from that, have a look at the following TechNet articles as well,

    DISM, MDT 2010, SCCM

    Enjoy these tools and do your own experiments.

    My new class for Windows 7

    I started my new batch on Windows 7 client configuration. This is the first batch of this particular module at the NetAssist training institute. Microsoft has greatly enhanced its books and training material this time, focusing more on the practical side. Students are given plenty of reading material through references to TechNet links. This is a good approach, since TechNet is regularly updated with the latest technical information.

    My first two weeks have been spent on deploying Windows 7 via image creation and distribution using the various methods available from Microsoft. One key advantage is that students get to experience the Hyper-V interface, since all the virtual PCs have been configured in Hyper-V!!

    Since virtualization is my favorite area, I had a good time explaining the features behind Hyper-V as well :)


    Which version of HYPER-V should I use?

    Normally, when you have too many options in the same product, it gets confusing. Sometimes the options are there to make your life easier, but they can still burden you when you don't have proper instructions and guidance. The same story goes for Hyper-V. Microsoft offers Hyper-V in several editions, and knowing which version to purchase (or get for free) depends on what you are going to do with it. Apart from that, I wanted to highlight the new command available for configuring Hyper-V on the Server Core edition: "sconfig.cmd" is a menu-driven command available in Server Core for configuring the server. It has been updated with new sets of options which make a Hyper-V administrator's life easier.
    Now, without further ado, let me introduce one of the charts available on the Microsoft web site which explains which edition to choose.

    Some of the areas where you can use Hyper-V are,

    * Test and Development
    * Server Consolidation
    * Branch Office Consolidation
    * Hosted Desktop Virtualization (VDI)

    Microsoft's free hypervisor is a good option for testing and R&D. If you are planning to consolidate more than four servers onto one physical server, then moving to the Datacenter edition will bring huge cost savings. More information on this licensing and how to maximize your investment in Hyper-V can be had at Tech.Ed 2010. Looking forward to seeing you there.


    Tech.Ed 2010 in Sri Lanka

    Hi everyone, we're so proud to present Tech.Ed in Sri Lanka. Microsoft Sri Lanka has taken a great initiative in organizing this event. We believe 2010 is going to be the ICT year, and there will be much improvement in our ICT sector in Sri Lanka. At the same time, we expect a boom in the enterprise sector as the usage of IT increases productivity and reduces cost.

    Tech.Ed will be starting on Feb 09th. Registration is currently open to everyone. This is one of the updates I received via FB.

    “Tech.Ed Sri Lanka 2010 standard price: SLR 12,000/=
    Register for Tech.Ed Sri Lanka now and save 10% on the standard price. Don’t delay… This offer expires 25th of January 2010.”

    So go ahead and grab your seat, guys. As I mentioned, this will be a great opportunity to experience a whole new level of Microsoft technology, get in touch with industry experts, and raise your questions.


    SCVMM 2008 R2 documentation update is out!

    If you're using Microsoft Hyper-V as your mainstream virtualization platform, then you know SCVMM is the centralized management console for managing several Hyper-V hosts. Apart from that, it has the capability to manage hosts across different virtualization technologies as well (e.g. ESX).

    Since SCVMM is a dynamic product which keeps evolving, new updates and how-to guides appear frequently. The Microsoft team recently released some documentation updates; you can reach them here. Apart from that, one of the best places to hang around and get the latest info is HYPER-V @ TechNet.

    In my personal view, 2010–2012 will be the peak period in which the Sri Lankan market adopts virtualization. Most enterprise companies have been observing and internally reviewing virtualization and how to adopt it. Since virtualization is a vast area, ISVs will have a great opportunity to provide the ideal solutions.




    CCleaner supports Windows 7

    How many times have you noticed, or suffered from, a PC or laptop getting slow and sluggish compared to when the system was new? We tend to keep filling it up with various software and downloads. Even though we try disk cleanup and disk defragmentation, that doesn't clean the Windows registry (the nervous system of Windows).

    As a solution, I have been using registry-cleaning software since the Windows 98 days. When it comes to Windows XP, I have found CCleaner to be the best for this, plus it is free. Apart from registry cleaning, it can tweak the registry, remove stale software records from Control Panel, etc. Removing browser caches and temporary files is one key strength of this software; not only Internet Explorer, CCleaner can identify most popular web browsers as well. So many options under the category of free :) Unfortunately, when I moved to Windows 7 there were no official plans or release dates for Windows 7 support. Even though the application would run under Windows XP compatibility mode, I didn't want to run it that way; the main reason is that it wouldn't recognize the Windows 7 registry and could do severe damage.

    A recent visit to their web site opened my eyes about compatibility with Windows 7! Some of the key improvements in the latest release are,




    CCleaner v2.27
    - Added support to Wipe MFT Free Space.
    - Added cleaning for Windows Explorer breadcrumb bar.
    - Added cleaning for Opera beta versions.
    - Added option to turn off CCleaner jumplist tasks on Windows 7.
    - Added menu option to Registry Cleaner to add key to exclude list.
    - Improved support for SeaMonkey 2.0.
    - Improved Windows 7 Taskbar progress accuracy.
    - Improved minimize to system tray functionality across all platforms.
    - Improved Opera cleaning by including opcache and temporary_downloads folder.
    - Improved performance of Compact database routines.
    - Improved Wipe Free Space code to avoid locks.
    - Improved security when deleting files.
    - Minor GUI and interface improvements.
    - Minor bug fixes.

    Give it a try and see how it performs for you. You can download the latest version from here.


    Microsoft Rental Rights Licensing scheme

    Beginning on the 1st of January 2010, Microsoft has started a rental licensing scheme for its operating system and Office products. If you're a partner who is renting or lending machines to customers for projects or training, this will be good news for you: you don't have to put a hefty premium on your rental price tag for the OS you're preinstalling and delivering. As per Microsoft, the advantages for resellers are as follows,

    Rental Rights licensing offers Microsoft resellers a range of benefits, including:

    • Customer satisfaction. You now have a way to sell licenses that fit your customers’ business models, help ensure their compliance, and solidify your role as a trusted advisor.

    • Convenience. No special tools, processes, reporting, or paperwork are necessary; the transaction works like any other license transaction.

    • Revenue. Selling the new licenses means new revenue.

    • Flexibility. Just like with other Volume Licensing SKUs, you have the flexibility to determine the pricing for your customers and to run promotions.

    In a way, this is a welcome method to reduce software piracy and give partners the freedom to comply with licensing as well. More information can be found over here.

    Apart from that, there are certain restrictions in this scheme as well. Those are as follows,

    Rental Rights licenses are user rights licenses only (they do not include software), so no media fulfillment is involved. The following important limitations apply to the Rental Rights licenses:

    • Perpetual license. A Rental Rights license is permanently assigned to a specific device and may not be reassigned to another device. When the device reaches its operational end-of-life, so does the license.

    • Remote access. Rental Rights do not allow for remote access to software.

    • Separate devices. Use of additional copies of the qualifying software on a separate portable device or a network device is not allowed.

    • Additive license only. Rental Rights licenses are not stand-alone product licenses and do not replace customers’ underlying Windows desktop operating system or Office system licenses; Rental Rights are additional licenses that modify the underlying license terms, allowing for rental, lease, and outsourcing of desktop PCs with licensed, qualifying Windows desktop operating systems and licensed, qualifying Office systems.

    • Virtual machines. Rental Rights do not account for software used within a virtual (or otherwise emulated) hardware system. In other words, the primary customer may not create and rent virtual machines.


    Few IT Solutions for SMB/SME market

    Regardless of the number of people in a company, from a business perspective SMBs and enterprises have similar requirements and requests from information technology. They all expect service continuity, anywhere access, and low cost! In this period, every company's dream is to get the maximum out of its IT investment and still reduce cost without losing functionality. Business continuity is a key factor for the survival of any business: a service disruption of a few minutes to a few days can have a devastating impact, depending on the nature of the business. So how can the SMB market segment overcome these limitations at a fraction of the cost that enterprise companies invest?

    To keep things simple, in this article I'll focus on Microsoft products and the features they offer. But as usual, hints will be provided for similar products as well :)

    1. Which operating systems should SMB customers invest in – My two cents goes to SBS 2008 or EBS 2008. There are significant advantages in these operating systems once they are properly configured and used. They get less attention because of the nature of the product names, but Small Business Server itself is not a product to be taken lightly; the solution is far more capable than it appears out of the box. If your company falls into the SME segment, then consider a scale-out product like Essential Business Server, which can be spanned across three physical or virtual servers. Again, these are enterprise-class products which are limited only by CALs, not by reduced features. (Period)

    2. Cost cutting on hardware and software purchases – Consider Hyper-V for server virtualization. It is ideal if you can run a few of your legacy applications in their own OS environments, so they conflict less with the latest operating system. Believe me, virtualization is the ideal solution for this.

    Whatever your next purchase, make sure it is 64-bit and virtualization-capable. Always make sure you have enough room for hardware expansion (e.g. buy a two-socket system with one physical processor, and buy RAM while leaving enough RAM slots free). Also consider whether your existing hardware can be utilized as a storage system: there are easy ways to convert your existing servers into cost-effective SAN storage and get the maximum out of them. Microsoft's SAN software offering will be coming through OEMs, so in the meantime you can consider a product like StarWind iSCSI storage. (More how-to articles on this in the future.)

    3. Back up and protect your data – This is part of your service continuity and availability plan. If you're going to have Hyper-V as your virtualization option, consider how to back up the virtualized environments as well. From the Microsoft point of view, DPM 2007 (Data Protection Manager) is the ideal solution for protecting your physical and virtual environments. DPM 2010 can be expected around Q2 2010 with lots of new improvements, including desktop backup and offline laptop backup.
    When it comes to DR solutions and high-availability options, the SMB market has been held back by pricey hardware devices and software. Thanks to various replication technologies and offline backup options, this is becoming a reality for the SMB market as well. Microsoft is working closely with ISV partners to make sure software solutions exist for data replication to DR sites. As I mentioned, StarWind is a very popular company coming up with these solutions. Best of all, these solutions cost a fraction of the price of DAS or a hardware SAN with HBA adapters.

    4. Consider managed services or hosted solutions when you don't have the necessary skills in house – Hosting e-mail accounts under your company domain name is no longer a dream. You can have your own Exchange server e-mail accounts and access your mail from mobile devices (OWA, OMA) at a fraction of the cost of having it in-house. The key question is to identify the trade-off of in-house vs. hosted solutions. For some SMBs, in-house might be the preferred way of controlling how things happen, but that comes with the additional price tag of hardware and a maintenance model.

    So as you can see, the SMB/SME market has many options when it comes to reducing cost while still meeting business requirements. It's all about making the right moves, knowing your business requirements and limitations. Let me know if anyone is interested in these solutions; I would be glad to provide more information.



    Reality of server virtualization and cost reduction

    Is your company facing the challenge of effectively managing server and infrastructure growth? Has server virtualization been shown to you as the only way to overcome the problem, along with the Green IT concept? Sadly, in Sri Lanka the Green IT concept is not widely adopted, even though reducing $$$ really makes sense for us :)

    Most of us have been hearing about server virtualization offered by various vendors (Microsoft, VMware, Citrix, etc.), but for a company to move down the virtualization road, what are the facts that need to be considered? The virtualization concept has expanded from server virtualization to desktop virtualization and application virtualization, so a decision maker needs to carefully consider which option is best for their company to adopt. Let's look at some of the key facts,

    1. Reducing cost – At a glance, this seems to be the biggest selling point: server consolidation of 3:1, 2:1 and even 10:1 or more. But the missing piece of the picture is: is this true for my company? Can I really achieve that kind of consolidation ratio? This is a question that needs to be asked of your IT department. How can I know which servers are the best candidates for consolidation? Depending on the answers, companies need to decide their virtualization strategy.

    2. Power / cooling savings – These are some of the direct benefits of virtualization. Reducing the server footprint in your data center means spending less money on power and data center cooling.

    3. Standardization / compliance – These are some of the indirect benefits companies can achieve. System administrators can better manage server operating systems and applications in virtualized systems by using systems management products. Bringing the entire data center into a standardized environment will bring better service management and will also help companies comply with industry regulations.

    At a glance, we have so many advantages in moving to server consolidation and better utilization of existing hardware through server virtualization. So what is the catch, the hidden untold story of server virtualization?

    1. Possible candidates – Identify the servers which can be virtualized. Not all the servers in your company can be virtualized immediately, so the first rule is to identify the ones that can. MAP (the Microsoft Assessment and Planning Toolkit) is a good tool for this job, and it goes well beyond identifying server virtualization candidates. Check it out for yourself on the MS web site :)

    2. Availability – When you have separate physical servers running your company applications, losing one server is not a major problem. But once you do a server consolidation of 3:1, what if the one physical server holding those three server roles goes down? How will that impact the company's overall functionality? Not a happy picture, right? Server availability is a critical factor when it comes to server virtualization. Plan well ahead for how you are going to protect those servers from failures. This is a part that is often left out of the $$$ calculation but needs careful attention: additional hardware, backup hardware, or backup software must be considered.
        Server virtualization is not always going to reuse your existing servers. As I have seen in our market, most companies still have five or six old servers in production, so you'll need to invest in new hardware. Above all, have good planning for how to recover from the loss of one physical server which hosts several virtual servers.

    3. Server sprawl – Creating virtual servers unnecessarily, or with too little control to manage the overall environment, will lead to this. Fortunately this is not a situation our market faces right now :)
    Once you have moved down the virtualization road, you have to have better control of what servers you create, how, and where. Server virtualization loosens the bond between the operating system and the physical server. If you're not careful enough, you'll end up creating an unnecessary number of virtual servers and wasting resources instead of saving them. Fortunately VMware vCenter and SCVMM are there to help, but at the end of the day the ultimate control is with humans, who are vulnerable to making mistakes :)

    4. Cost – Virtualization by itself is not going to bring immediate ROI if you don't plan well; instead it becomes another burden on your IT budget. Additional storage cost, network equipment, servers, etc.: you name it, costs will grow if you don't identify the above-mentioned goals properly. Know what you need to virtualize and how to protect it as well.

    So where do we end up with server virtualization? How can SMBs, SMEs, or enterprises enjoy this technology? The number one rule is to take time for careful planning. Use the free tools available to better understand your environment and plan resources well ahead. Virtualization is not limited to the server side, so you can be pulled in many directions. Know the company's business objectives and how to drive them with virtualization. None of the facts mentioned above are meant to keep you away from virtualization. My honest opinion is that virtualization (be it server, application, etc.) is really good, but a little bit of extra planning will make your life easier.

    Happy virtualization!


    Windows 2008 Failover Clustering setup (101 guide)

    Before jumping into high availability, it would be really good if all readers were on the same page about clustering technology as well. Recently I went through the history of clustering to get an idea about it; interestingly enough, there is a lot more than meets the eye in clustering :) Some history about clustering can be found over here.

    What is clustering - In its most elementary definition, a server cluster is at least two independent computers that are logically and sometimes physically joined and presented to a network as a single host. That is to say, although each computer (called a node) in a cluster has its own resources, such as CPUs, RAM, hard drives, network cards, etc., the cluster as such is advertised to the network as a single host name with a single Internet Protocol (IP) address. As far as network users are concerned, the cluster is a single server, not a rack of two, four, eight or however many nodes comprise the cluster resource group.

    Why cluster -
    • Availability: avoids problems resulting from system failures.
    • Scalability: additional systems can be added as needs increase.
    • Lower cost: supercomputer power at commodity prices.

    What are the cluster types -

    • Distributed Processing Clusters
      • Used to increase the speed of large computational tasks.
      • Tasks are broken down and worked on by many small systems rather than one large system (parallel processing).
      • Often deployed for tasks previously handled only by supercomputers.
      • Used for scientific or financial analysis.
    • Failover Clusters
      • Used to increase the availability and serviceability of network services.
      • A given application runs on only one of the nodes, but each node can run one or more applications.
      • Each node or application has a unique identity visible to the “outside world.”
      • When an application or node fails, its services are migrated to another node.
      • The identity of the failed node is also migrated.
      • Works with most applications as long as they are scriptable.
      • Used for database servers, mail servers or file servers.
    • High Availability Load Balancing Clusters
      • Used to increase the availability, serviceability and scalability of network services.
      • A given application runs on all of the nodes and a given node can host multiple applications.
      • The “outside world” interacts with the cluster and individual nodes are “hidden.”
      • Large cluster pools are supported.
      • When a node or service fails, it is removed from the cluster.  No failover is necessary.
      • Applications do not need to be specialized, but HA clustering works best with stateless applications that can be run concurrently.
      • Systems do not need to be homogeneous.
      • Used for web servers, mail servers or FTP servers.

    Now, coming back to Microsoft clustering: it goes back to the good old NT 4.0 era, under the code name "Wolfpack". Step by step it grew, coming into its own in the Windows 2000 period and giving customers confidence in the stability of Microsoft clustering technology. Field engineers who have configured Windows 2003 clustering will know the painful, very lengthy steps they had to take to configure it. With Windows 2003 R2, Microsoft offered various tools and wizards to make clustering a less painful process for engineers. If you're planning to configure Windows 2003 clustering, one place you should definitely look is this site.

    Now we're in the Windows 2008 era, and clustering has been improved dramatically on the configuration side as well as in terms of stability. The new name for clustering is "Windows Failover Clustering".

    As I have been telling audiences in public sessions, clustering is no longer a technology focused only on the enterprise market; it can be utilized by the SMB and SME markets as well, at a fraction of the cost. As usual, I will be focusing on Hyper-V and how combining it with clustering can help users get the maximum benefits out of virtualization and high availability. Hyper-V is Microsoft's flagship virtualization technology: a 100% bare-metal hypervisor. There is a lot of misguided conception that Hyper-V is not a true hypervisor; the main argument highlighted is that you need Windows 2008 to run Hyper-V. This is wrong!!! You can set up the standalone Hyper-V hypervisor on a bare-metal server and set up virtual PCs on it. The free Hyper-V Server version can be downloaded from here. Comparisons of the Hyper-V editions can be found over here.

    So now that we have some idea about clustering technology, how can it be applied to a Hyper-V environment to create a highly available virtual environment? We'll have a look at a recommended setup for this scenario,


    For this setup, we'll need two physical servers; we'll call them Host1 and Host2. Each host must be 64-bit and have a virtualization-capable processor. Apart from that, Microsoft recommends using MS-certified hardware. Based on my knowledge, I would say the ideal environment is as follows,

    1. Branded servers with an Intel Xeon quad-core processor (better to have a two-socket motherboard for future expansion).
    2. 8 GB of memory and a minimum of 3 NICs; it is always better to have additional NICs.
    3. 2 × 76 GB SAS or SATA HDDs for the host operating system.
    4. SAN storage. (Just hold on there, folks; there is an easy way to solve this expensive matter… :)




    Now the above system is fully capable of handling a decent workload. On to the configuration part :) I'll try to summarize the steps, with additional tips where necessary,

    1. Install Windows 2008 Enterprise or Datacenter edition on each host computer. Make sure both of them get the latest updates and that both hosts have the same updates for all software.

    2. Go ahead and install the Hyper-V role.
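If you prefer the command line over Server Manager, a sketch for Windows Server 2008 R2 (the full installation uses the ServerManager PowerShell module, while Server Core uses ocsetup):

```shell
REM Full installation: add the Hyper-V role via PowerShell and reboot.
powershell -command "Import-Module ServerManager; Add-WindowsFeature Hyper-V -Restart"

REM Server Core equivalent:
REM   start /w ocsetup Microsoft-Hyper-V
```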

    3. Configure the NICs accordingly. Taking one host as the example, the NIC configuration will be as follows,
        a) One NIC connected to your production environment; add the IP address, default gateway, subnet mask, and DNS.
        b) The second NIC is the heartbeat connection between the two host servers; add the IP address and subnet mask only, and make sure it is in a totally different IP subnet.
        c) The third NIC is configured to communicate with the SAN storage. I'm assuming we'll be using iSCSI over IP.
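As a sketch, the same three NICs can be set from the command line with netsh (the interface names and addresses below are examples only):

```shell
REM a) Production NIC: IP, subnet mask, default gateway, then DNS.
netsh interface ip set address "Production" static 192.168.1.11 255.255.255.0 192.168.1.1
netsh interface ip set dns "Production" static 192.168.1.5

REM b) Heartbeat NIC: IP and subnet mask only, on its own subnet.
netsh interface ip set address "Heartbeat" static 10.0.0.1 255.255.255.0

REM c) iSCSI NIC: dedicated subnet towards the SAN storage.
netsh interface ip set address "iSCSI" static 172.16.0.11 255.255.255.0
```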

    4. Now for the SAN storage: you can go ahead and buy expensive SAN storage from HP, Dell, or EMC (no offence, guys :) ), but there are customers who can't afford that price tag. For them, the good news is that you can convert your existing servers into SAN storage. We're talking about converting your existing x86 systems into software-based SAN storage which uses the iSCSI protocol. There are third-party companies which provide software for this; personally, I prefer the StarWind iSCSI software.
    So all you have to do is add enough HDD space to your server and then, using the third-party iSCSI software, convert your system into SAN storage. This will be the central storage for the two Hyper-V-enabled host computers.

    Once the iSCSI software is in place, go ahead and create the necessary storage on the SAN server. How to create the cluster quorum disk and the other disk storage is covered in the relevant storage vendor's documentation. For the quorum disk, try to make it 512 MB if possible, but most SAN storage won't allow you to create a LUN below 1024 MB, so in that case act accordingly. (The StarWind documentation walks through creating the relevant disks.)


    5. Go to one of the host computers and add the Failover Clustering feature.
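The same feature can be added from an elevated prompt; a sketch for Windows Server 2008 R2:

```shell
REM Add the Failover Clustering feature (same as ticking it in Server Manager).
powershell -command "Import-Module ServerManager; Add-WindowsFeature Failover-Clustering"
```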


    6. Open the iSCSI initiator on Host1 and connect to the SAN storage: click Add Portal and enter the IP address of the SAN storage. Once connected, it'll show the relevant disk mappings. (It's that easy in Windows 2008 R2 now.)
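For reference, the same connection can be scripted with the built-in iscsicli tool (the portal address and target IQN below are examples; your SAN software will report the real IQN):

```shell
REM Register the SAN's portal, list the targets it exposes, then log in.
iscsicli QAddTargetPortal 172.16.0.100
iscsicli ListTargets
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:san1-quorum
```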



    7. Once that is complete, go to Disk Management, initialize the disks, format them, and assign drive letters accordingly (e.g. drive letter Q for the quorum disk, etc.).
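The Disk Management steps can also be scripted with diskpart; a sketch for the quorum LUN, assuming the new LUN shows up as disk 1:

```shell
REM quorum.txt -- run with: diskpart /s quorum.txt
select disk 1
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs label="Quorum" quick
assign letter=Q
```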


    8. Go to Host2, open the iSCSI initiator, and add the SAN storage. Then go to Disk Management and assign the same drive letters to the disks as configured on Host1.

    9. Go to the cluster management console and start setting up the cluster. One cool thing about the Windows 2008 cluster setup is the Cluster Validation Wizard. It runs a series of configuration checks to make sure you have completed the cluster setup steps correctly. This wizard is a must, and you should keep its report safe in case you need support from Microsoft or a technical person. Once the cluster validation has completed, we can go ahead and add a cluster role; in this case we'll be selecting File Server as our cluster role.
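On Windows Server 2008 R2 the validation and cluster-creation wizards also have PowerShell equivalents; a sketch with example host names and an example cluster IP:

```shell
REM Run the full validation test against both hosts, then create the cluster.
powershell -command "Import-Module FailoverClusters; Test-Cluster -Node Host1,Host2"
powershell -command "Import-Module FailoverClusters; New-Cluster -Name HVCLUSTER -Node Host1,Host2 -StaticAddress 192.168.1.20"
```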


    10. Once the cluster validation is completed, go ahead and create a clustered service. In this demonstration I'll use the clustered file server feature.


    Go ahead and give an administration name for the clustered file server, and after that select a disk for the shared storage; for this we'll use a disk created on the SAN storage,
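These wizard steps have a one-line PowerShell equivalent on Windows Server 2008 R2 (the name, address, and disk name below are examples):

```shell
REM Create the clustered file server role on an existing cluster disk.
powershell -command "Import-Module FailoverClusters; Add-ClusterFileServerRole -Name FS-CLUSTER -Storage 'Cluster Disk 1' -StaticAddress 192.168.1.21"
```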


    11. Once that step is completed, you'll be back in the cluster management console, where you'll be able to see the cluster server name you've created. So we have created the cluster, but we still haven't shared any storage. Now we'll go ahead and create a shared folder and add a few files so users can see them,


    Now, once we log in from a client PC, we can type the UNC path and access the shared data on the clustered file server :)


    Phew…!! That was the longest article I have ever written :) OK, I guess by now you'll have the idea that Windows 2008 clustering is not very complicated if you have the right tools and resources. That is just the outer layer; internally, to secure the environment, we'll also need to consider things like CHAP authentication, IPsec, etc. Since this is a 101 article, I kept everything simple.

    Let me know your comments (good or bad) about the article, so I'll be able to provide better information that will be helpful for you all.