Monday, December 21, 2009

FIX: Avaya Patch to Enable Caller ID on XO IP Flex (Tandem Calls)

I wrote an article here a while back on a product by XO called IP Flex. In it I listed an issue we were having with caller ID on tandem calls (call forward & EC500). I was notified by my Avaya Business Partner, Sunturn, that Avaya released a fix for this issue that is included in Communication Manager 5.2.1. I have since upgraded to this release and, after enabling the recommended 'Special Applications', we are able to pass caller ID on tandem calls.

The special applications are as follows:

SA8870 (Page 6)
SA8931 (Page 7)

Thursday, December 10, 2009

How A Programmer Views His Day

I give credit to Aaron Hansen for this post. He is one of the software engineers at MetaSource (the company I work for) and the author of the content below. I know it describes him to a T, but it seems to apply to a few other programmers I know also.

Day day = new Day(DateTime.Now);
try {
    Thread.Sleep(new TimeSpan(8, 0, 0));
}
catch (YouTubeException yte) {
    day.Wasted = true;
}
catch (Exception ex) {
    day.Sick = true;
}

Tuesday, December 1, 2009

Go Green: Replace Your VOIP PBX With a Traditional PBX

I was looking at our PoE switches the other day and noticed that our PoE VOIP phones (Avaya one-X Deskphone 1616s) draw between 2.5 and 5.5 watts of electricity, with an average of 4.68 watts across the 25-phone sample. This is the average with no one on the phones. It seems high, but we are really not talking about a lot of money. Here are my calculations:

4.68 W X 25 phones = 117 W = .117 kW X 24 hours = 2.808 kWh per day X 365 days = 1,024.92 kWh per year

In Hawaii at 20.8 cents per kWh, the cost is $213.18 per year. In Massachusetts at 18.17 cents, the cost is $186.23 per year. In Utah at 7.07 cents, the cost is $72.46 per year. In the U.S. on average at 10.3 cents, the cost is $105.57 per year.
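The per-state figures above can be reproduced with a short script (the wattage and electricity rates are the numbers quoted in this post):

```python
# Annual energy and cost for 25 idle VOIP phones, using the measured 4.68 W average.
watts_per_phone = 4.68
phones = 25

kwh_per_year = watts_per_phone * phones / 1000 * 24 * 365  # ~1,024.92 kWh

# Electricity rates in dollars per kWh, as quoted in the post.
rates = {"Hawaii": 0.208, "Massachusetts": 0.1817, "Utah": 0.0707, "U.S. average": 0.103}
for state, rate in rates.items():
    print(f"{state}: ${kwh_per_year * rate:.2f} per year")
```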

Also, you have to figure that even with a traditional PBX the phone draws some power. Here is where it gets difficult. I could find data for analog phones easily enough: they draw around 5 mA on-hook and around 21 mA off-hook, with a maximum of around 120 mA, at around 48 VDC. However, most PBX systems now use digital telephone sets, and finding any electrical information on these types of phones proved extremely difficult.

I searched online and called a couple of phone manufacturers, but I still could not get a definitive answer. I am guessing that newer digital sets with larger LCD displays draw more power, but until I can actually test a digital phone I am going to make a big assumption: older digital phones probably draw about the same amount of power as analog phones. Since phones probably do not draw much power while on-hook, I am hoping I am not underestimating for these older digital phones with limited or no displays. As soon as I can get my hands on a digital telephone, I plan to test it. Please comment if you have any information. So, here is the calculation:

Assuming 2 hour average talk time per phone per day
On-hook (22 hours/day): .005 A X 48 V = .24 W X 25 phones = 6 W = .006 kW X 22 hours = .132 kWh per day X 365 days = 48.18 kWh per year
Off-hook (2 hours/day): .021 A X 48 V = 1.008 W X 25 phones = 25.2 W = .0252 kW X 2 hours = .0504 kWh per day X 365 days = 18.4 kWh per year
(There would be a difference for the ring cycle on analog lines and maybe on digital, but I don't think it would be significant in most environments)
Total = 66.58 kWh per year
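The analog math above can be sketched the same way (the currents and voltage are the figures quoted earlier, and talk time is the assumed 2 hours per phone per day):

```python
# Annual energy for 25 analog phones at 48 VDC, assuming 2 hours of talk time per day.
phones, volts = 25, 48
on_hook_amps, off_hook_amps = 0.005, 0.021
off_hook_hours = 2
on_hook_hours = 24 - off_hook_hours

def kwh_per_year(amps, hours_per_day):
    # current (A) x voltage (V) = watts per phone; scale to kW, then to hours per year
    return amps * volts * phones / 1000 * hours_per_day * 365

total = kwh_per_year(on_hook_amps, on_hook_hours) + kwh_per_year(off_hook_amps, off_hook_hours)
print(round(total, 2))  # ~66.58 kWh per year, about $6.86 at the U.S. average rate of 10.3 cents
```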

So, in the U.S. on average you would be paying $6.86 per year to operate the 25 traditional phones. Therefore, the difference between VOIP and traditional for 25 phones is, on average, in the U.S., about $100 a year. However, you can make it sound a lot more ominous by saying that by running VOIP phones you are emitting more than half a metric ton more carbon dioxide into the atmosphere every year.

Here are some other "Green" benefits of using traditional phones in place of VOIP that are harder to quantify:

Less networking equipment
Potentially less Air Conditioner usage
Ability to use older copper wiring and not have to replace it with Cat 5e (although you can argue that by placing the network switches closer to the edge you are reducing the amount of cable needed)
Traditional PBXs may end up in a dumpster if replaced

I realize that I have made many assumptions here. For example, a call center would have the phones off-hook at least 8 to 12 hours per day and possibly 24 hours per day. Also, for the most part, you can run more VOIP phones per physical PBX than analog/digital telephones. I am assuming you are running a PBX that supports at least 25 VOIP or traditional phones or that the PBXs have similar power requirements aside from the phones. In addition, I am assuming that most VOIP phones will have similar power requirements to the Avaya 1616. I am sure there are many more I have missed.

I have seen numbers floating around that say it is 30 to 40 percent less expensive to operate traditional phones than VOIP phones, but nobody had any data to back that up.

Hey, I am not saying don't buy a VOIP PBX. We have replaced our traditional PBXs. Who knows what the deal is with global warming anyway.

Monday, November 9, 2009

Browser Market Share Among IT Professionals - A Statistically Insignificant & Questionable Analysis

I realize as I write this post that I am officially a geek. I not only have an IT blog, but I am using Google Analytics to create reports and track how many people are visiting the blog, where they live, what type of internet service they have, and what browser they are using. It is this last bit of evidence on browsers that made me think to write this post.

I know that my sample is not large enough to be significant (around 270 unique visitors), and I know there is no way to verify that the people visiting are IT professionals. But, I was surprised at the percentages of users using non-IE browsers. I knew that Firefox had taken a large share of the market, but I was surprised that Chrome and Safari both had about a 5% share in my sample. Here is the breakdown:

Comparing this to recent browser market share stats here, I found that my stats showed a greater loss (> 10% more) for IE. Based on this information and personal experience at the company I work for, I am going to postulate that IT professionals are switching browsers at a much faster rate than less technical users.

Ok, obvious, right? But the real question I began asking myself is what the reasons are that people switch or don't switch. Why do people prefer IE, Chrome, Firefox, or Safari? If there really is a "best" browser, what is keeping everyone from switching?

Well, I don't have all the answers, but I can at least explain why, where I work, only IT users use browsers other than IE.

We participate in around 6 security audits every year. All of these audits require patch management (preferably automated), approved software lists, vulnerability assessments, etc. Why does this matter? Well, I haven't done extensive research, but I am not aware of any free way to automate the deployment of patches from a central server and receive reports on the results of these deployments for non-IE browsers. I am not saying that this necessarily makes IE more secure or a better solution, but with Windows Server Update Services (WSUS) and WSUS reporting it makes it much easier to provide evidence for audits.

Combine the deployment and reporting issues with the fact that many websites both external and internal (third-party web-based apps) are not written to be compatible with all browsers. Add the fact that more browsers (or versions/brands of software in general) means more training and increased support calls, and it makes more sense to only allow one browser.

However, software developers need to be able to test web applications for compatibility with multiple browsers, and support personnel may need to support users from home that may use unapproved browsers. Therefore, IT personnel are allowed (approved) to install and use non-IE browsers.

So, what I am wondering now is if this is true for other organizations, what is the true browser market share?



Saturday, November 7, 2009

IIS Makes Website Redirection Easy

I have found many uses for the redirection functionality that is built into IIS over the years, and I am surprised by how many times I come across people using redirect files or other methods to redirect visitors to new sites or to SSL.

Microsoft has some good information here on how to use redirection in IIS. However, I thought I would give some real world examples and provide some screenshots. You will also want to check out this reference to see all of the available parameters. OK, let's get started.

Redirect to SSL

I use IIS redirection the most to redirect regular web traffic to SSL. Here is how:

Example: We want to redirect to SSL. [Site] will be used as a placeholder for whatever you want to call the website in IIS.

Step 1: Set up website called "Redirect [Site] to SSL" (This website will only run on port 80 and will not be assigned a server certificate. You may have to add a host header or assign a specific IP to this site if you are hosting multiple sites.)

Step 2: Now, go to the properties of the site you just created, select the 'Home Directory' tab, select the 'A Redirection to a URL' radio button, under 'Redirect to:' enter ''. For this particular example, you can ignore the check boxes below. However, you may want to use one based on your need and security requirements so do some research.

Step 3: Set up website called [Site] and assign it a certificate so that it will accept SSL connections. (You will either need to add a fake host header, I use 'asdf', or change the TCP port to another number, e.g. 8080 or 1234, so that the sites will not interfere with each other. Port 80 is not needed on this site since we want 'Redirect [Site] to SSL' to respond to HTTP (port 80) traffic.)


Redirect to SSL Exchange 2003 & 2007

Step 1: Follow directions for 'Redirect to SSL', but instead of entering '' in step 2 enter '' (Exchange 2003) or '' (Exchange 2007).

Problem: If someone enters '', no redirection will take place because the traffic is not hitting the redirect site.

Solution: Go to the 'Properties' of the root of your [Site] (the one running SSL) and select the 'Home Directory' tab. Select the 'A Redirection to a URL' radio button, under 'Redirect to:' enter '' (Exchange 2003) or '' (Exchange 2007). Then, check the check box 'A directory below URL entered'. (If you do not check this box, it will apply redirection to all of the sub-directories also. You do not want that.)


Redirect and Preserve Suffix and Parameters (Query String) - Redirection Parameters

Example: You want to redirect '' and '' to '', but you want to make sure that '' or '' redirect to ''. You want to make sure that the redirection preserves the suffix and the parameters.

How: Assuming the sites are already set up go to 'Properties' of the site that responds for '' and '', select 'Home Directory' tab, select the 'A Redirection to a URL' radio button, under 'Redirect to:' enter '$S$Q', check 'The exact URL entered above'. When using 'Redirection Parameters', you may (not always) need to check the check box 'The exact URL entered above'. In this example, the box must be checked. $S and $Q are 'Redirection Parameters'.
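As a rough illustration of how the 'Redirection Parameters' expand (the URLs here are hypothetical, and this little function only mimics what IIS does for a site-level redirect):

```python
# Simulates the IIS redirect variables for a site-level redirect (hypothetical URLs).
# $S is the suffix of the requested URL (the path); $Q is the query string with its leading "?".
from urllib.parse import urlsplit

def iis_redirect(request_url, target):
    parts = urlsplit(request_url)
    suffix = parts.path
    query = "?" + parts.query if parts.query else ""
    return target.replace("$S", suffix).replace("$Q", query)

print(iis_redirect("http://oldsite.example.com/reports/q3.asp?id=5",
                   "https://newsite.example.com$S$Q"))
# https://newsite.example.com/reports/q3.asp?id=5
```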

I won't provide any more examples here, but see the 'Redirect Reference' here to find out more. There are a few more useful parameters and it explains how to use wildcards for more advanced redirection.

Tuesday, November 3, 2009

XO IP Flex - True Shared Voice & Data - SIP for the Masses

Alright, I know the title makes me look like an XO salesperson or an advertiser, but I really just wanted to write an informative piece about the service. I am just a customer and overall I am pretty satisfied with the service. I will include the positives and the negatives of the service.

For those unfamiliar with the XO IP Flex product, it is a shared data/voice offering that uses SIP trunking between an XO-owned, on-premise router and the XO VOIP switch (BroadSoft). The on-premise router then provides an analog or TDM hand-off to the customer.

So, here is how it works. You can order the service in four different configurations: 1 T1 (1.544 Mbps, up to 16 voice channels); 2 T1s (~3 Mbps, up to 32 voice channels); 3 T1s (~4.5 Mbps, up to 48 voice channels); or 10 Mbps with up to 72 voice channels (not sure how many T1s here, but we have a 10 Mbps Ethernet circuit, and the delivery depends on the distance from the CO). When you order, you choose how many voice channels you want provisioned (you can change the amount later, up to the maximum), and then you choose whether you want them handed off as a PRI, digital trunk, analog lines, or some combination.

Now, this isn't like traditional shared T1 service where some channels are dedicated to voice and the remainder to data. The service here is basically a Cisco router that provides QoS for voice traffic, which means that if you are not using any voice, you are not losing any data bandwidth.

Here is what is great about this setup. You can use any existing TDM phone system that supports PRI, digital trunks, and/or analog lines, or you can use regular analog phones (XO can provide the PBX functionality). At the same time, you can still take advantage of the cost benefits of VOIP. Your phone system won't even know you are using VOIP. You also get free QoS for voice traffic between the router and the XO VOIP switch. No need to buy expensive equipment to provide quality voice.

Some of the other benefits include:

- Free calling between any of your XO IP Flex locations (Why not? It is never leaving the XO network. See full data sheet linked to below for more info and restrictions.)
- Buckets of minutes and even Enterprise buckets (shared among locations)
- Cheap rates if you exceed bucket
- Free 800 number
- Lots of DIDs (telephone numbers) included
- If you choose to just use analog phones, or if your PBX is not very functional, you can administer the XO PBX from the business portal to provide PBX services. You can even purchase add-on services for enterprise-class PBX features.
- Full list of features here.

Alright, now for the negatives. First, XO IP Flex cannot provide a SIP hand-off to the customer; direct SIP trunking requires you to order a different product from XO. Second, it is pretty darn close, but I don't think it is 100% as reliable as traditional TDM or analog voice service. The final negative I am aware of is that, at least with Avaya Communication Manager, you cannot send caller ID that is not a DID assigned to your XO account.

The main problem with not being able to pass arbitrary caller ID comes with call forwarding or extension-to-cellular type calling (UPDATE: Avaya has fixed this issue. See here). Usually phone systems will pass the originating caller's information on so you know who is calling; in this scenario, however, those calls are dropped. Now, this could be fixed at any time, and if you don't need caller ID you can have XO hardcode your BTN for all calls. Also, it may not be a problem with other PBX systems, and you can always add some translations in your PBX to pass a DID assigned to your system, like the primary business telephone number (BTN). So, I guess what I am trying to say is that you should do some research if this is the only thing holding you back.

One last thing, if you are using the XO portal as the PBX, I believe they offer forwarding and extension to cellular type services (may cost extra) that would most likely not experience the caller ID issues.

OK, so now for my conclusion. The IP Flex service is not for everyone, but if you own a non-VOIP or hybrid PBX or if you don't own a PBX, you may want to check it out. You may save some money and who doesn't like that.

I know there are other carriers that have similar services. Feel free to comment if you have a particular service you have used and are happy (or unhappy) with. Also, if you have any positives or negatives to add, let me know.

Monday, November 2, 2009

High CPU - dwwin.exe or dumprep.exe - Terminal Services - Windows Error Reporting

If you experience sudden slowdowns for your terminal services users and your research shows dwwin.exe or dumprep.exe hogging the processor, disable Windows Error Reporting. In fact, I think it is best to disable this on all terminal servers (and even all machines), unless needed, through a GPO. See for ways to disable error reporting.

When error reporting is enabled, an application crash spawns a process that collects information to send to Microsoft and/or for internal use. This process can really slow down your machine, and in a shared environment like terminal services it can have devastating effects. Luckily, it is easy to disable.
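As one example (these are the standard registry values behind the error-reporting policy on XP/Server 2003-era systems, the era of dwwin.exe and dumprep.exe; verify them against your OS version, and prefer setting the equivalent policy in a GPO under Administrative Templates > System > Error Reporting):

```shell
rem Disable Windows Error Reporting and its UI on XP/Server 2003-era machines.
reg add "HKLM\SOFTWARE\Microsoft\PCHealth\ErrorReporting" /v DoReport /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Microsoft\PCHealth\ErrorReporting" /v ShowUI /t REG_DWORD /d 0 /f
```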

Preferred Hardware for Virtualization Part II - Configurations & Performance

Now that you have the specs for the Dell 2900 III and the Dell T610 servers, I wanted to give some examples of configurations I use in my environment along with some performance and utilization statistics (real stats from the past month). Both of my configurations here will be for the 2900. The first configuration is the one running the greatest number of virtual hosts. The second is the server that runs half (about 32) of our Thin Clients in our production data entry facility (see post here) and half of our servers for that location. To make it easier, I will use retail prices for any pricing info I give, but understand that with volume licensing programs and a Dell account you may be able to get significant discounts (for example, Open Business licensing on Datacenter edition without Hyper-V is around $2,150, and the servers below can get down to around $6,500).

Configuration 1
Dell 2900
2 X 2.66 Ghz Processors (Intel 5355 - Quad-core)
8 X 400 GB 10k SAS drives (RAID 10 - ~1.44 TB usable space)
4 X Gb NICs (Two built-in, 1 Add-in)
Total Server Cost - ~$8,500
2 X Windows DataCenter Edition Processor Licenses - 2 X $2,971
1 X VMWare ESX Standard 2 Proc. (could have used foundation) - $2,568

Total Cost for Unlimited Use - ~$17,000
Cost per Virtual Host - ~$810 (Remember, this includes all server licensing costs)
Number of Virtual Hosts - 21

Server Breakdown
1 X Domain Controller
2 X Processing Box
1 X Terminal Server (Approx 10 Thin Client Users)
1 X Virtual Center Server
6 X Web Servers
2 X Backup Servers
4 X Personal Computers (Still Server Edition to take advantage of unlimited licensing)
1 X Email Archival
1 X Video Conferencing
1 X FTP Server
1 X File Server

CPU Utilization (in Mhz)
10,646 (Almost 1 full processor)
4,606 (2 Cores)

Memory (Percentage)

Disk Usage (in KBps - should support more than 40k KBps)

Network Utilization (in KBps)
42,224 (Proof that gigabit is not always required for VMWare)

So, as you can see here, we overbuilt this server. It was one of our first servers. We could have gone with one processor which would have saved ~$3,700 (1 Datacenter Proc + 1 CPU). This would have brought the system cost to ~$13,300 and then per virtual machine cost to just over $600. Plus, there is still room for growth.

Now for configuration 2. I was surprised to find that running client sessions through terminal services takes up much more processing power than most servers. It is a great way to use up extra processing power.

Configuration 2
Dell 2900 III
1 X 2.83 Ghz (Intel 5440 - Quad-core)
8 X 450 GB 15k SAS drives (RAID 6 - ~2 TB usable space, limited by the RAID controller)
4 X Gb NICs (Two built-in, 1 Add-in)
Total Server Cost - ~$8,000
1 X Windows DataCenter Edition Processor Licenses - $2,971
1 X VMWare ESX Foundation 2 Proc. (may offer single processor now) - $1,889

Total Cost for Unlimited Use - ~$12,860
Cost per Virtual Host - ~$1,286 (Remember, this includes all server licensing costs)
Number of Virtual Hosts - 10

At first look, we are paying a lot more per host with this configuration, but remember there are 32 clients running on the terminal servers, so even though the cost is higher we are gaining a lot of value.
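The cost-per-host arithmetic for both configurations is simple enough to sketch (the dollar figures are the retail prices quoted above):

```python
# Total cost and cost per virtual host for the two configurations described above.
def cost_per_host(server, license_total, vmware, hosts):
    total = server + license_total + vmware
    return total, total / hosts

# Config 1: two Datacenter processor licenses, ESX Standard, 21 hosts.
print(cost_per_host(8500, 2 * 2971, 2568, 21))  # roughly $17,000 total, ~$810 per host
# Config 2: one Datacenter license, ESX Foundation, 10 hosts.
print(cost_per_host(8000, 2971, 1889, 10))      # $12,860 total, $1,286 per host
```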

Server Breakdown
1 X Domain Controller
1 X SQL Server
1 X Processing Box
1 X Security Controller
2 X File Server
2 X FTP Server (One for stateless Thin Client logins)
2 X Terminal Servers

CPU Utilization (in Mhz)
6,291 (Little more than 1/2 full processor. Turn off error reporting or you will see huge spikes in CPU when apps crash. See here.)
2,627 (1 core)

Memory (Percentage)
63 (Terminal server sessions can eat up memory if the users are using memory-intensive apps like Outlook and some browsers. Make sure you test this to get an idea of how much each session will use)

Disk Usage (in KBps)
66,320 (Biggest offender SQL averages over 21,000)
13,672 (Terminal servers average under 4)

Network Utilization (in KBps)
6,444 (Proof that gigabit is not always required for VMWare)

Preferred Hardware for Virtualization - Dell 2900 III & Dell T610

I understand that not everyone is going to agree with this post. I don't really expect them to. When designing a system there are many variables, and the best solution will not always be the same solution. However, in my current position, I have done the analysis many times for different projects and for some reason, I almost always come back to the same hardware. So, I figured I would share the reasons why I usually end up purchasing these servers for my virtualization needs.

Obviously it all boils down to value and functionality. I feel that these servers deliver great performance at a great price and they are functional in ways that fit our needs. Ok, first I will discuss the downside to these servers so that I can explain why, for me, it doesn't matter. The downside is that these servers are a whopping 5U. Yeah, I know, that is a lot of rack space for a server these days. However, if you are just starting down the virtualization path, you will usually find that you have more than enough rack space, and if not, there are definitely other solutions that will save you the space.

Now, if you are familiar with Dell hardware, you may have overlooked these. In order to find them, you usually need to select tower servers and most medium to large organizations do not browse that area of the Dell website. Even though they are listed in the tower server section, both of these servers have optional rack configurations (Last time I checked, they took away the rack config for the 2900 III, but I am sure you can still get it if you have an account rep or call in).

Alright, so why am I listing two different models? The T610 is basically the replacement for the 2900 III and supports the latest Intel processors. The price is not much more than the 2900, so if you need faster processing you may want to skip the 2900. But I have included it because there is a slight cost savings, because it has one nice feature the T610 does not, and because most people would be surprised how little processing power their servers actually need. So, what is the feature that has saved the 2900? Dell calls it a flex bay. Basically, the flex bay lets you fit two extra drives in the server, either mirrored or striped. These drives can be used to run the OS so that you don't have to steal drives from the main RAID controller or partition a section of your array for the OS. Unfortunately, as of the last time I checked, the T610 does not offer a flex bay.

Ok, I guess I should move on to the stuff that really matters. Here are the specs that make these servers my preferred choice for virtualization:

Dell 2900 III
Dual socket. Supports two dual-core (52xx) or quad-core (54xx) CPUs. When it comes to virtualization, I always go with the fastest processor with the greatest number of cores (within reason). The main reason for this is OS licensing as I will explain later.

12 DIMMs. That's right, 12. So, you can add 48 GB (12 X 4 GB) of memory to the server.

Hard Drives
Up to 8 X 3.5 inch SATA or SAS drives (One of my next posts is going to be titled 'The Case for Direct-Attached Storage (DAS)'. I will link to it here when I am done.). I know that some people will argue that 2.5 inch drives are faster, but as long as the 3.5 inch drives are fast enough, they are, as of today, still cheaper and available in larger capacities. What does all this mean? Well, for under $5,800 you can get 8 X 450 GB 15k SAS drives, giving you around 3.5 terabytes of raw storage. (The RAID controller is limited, so you will not be able to use all of it, but the next step down is 300 GB 15k drives (2.34 TB raw), and it only saves you a few hundred dollars.)

Flex Bay
Two more for a total of 10 3.5 inch SATA or SAS drives (Note: these cannot be added to the 8 drive array)

Expansion Slots
No compact PCI or riser slots here. 6 expansion slots: 1 x8 PCI Express (x8 lane with x8 connector), 3 x4 PCI Express (x4 lane with x8 connector), and 2 64-bit/133 MHz PCI-X (supports full-height, full-length 3.3V PCI or PCI-X cards).

Full specs here.

Dell T610
Dual socket. Supports two quad-core (55xx) CPUs.

12 DIMMs (RDIMM or UDIMM). That's right, 12. But this model is slightly different in that you can only use 6 DIMMs per processor. However, you can get 8 GB sticks of RAM, so for a single processor you can have up to 48 GB and for two processors up to 96 GB. If you are cost conscious, though, you will probably stick to the 4 GB sticks, which cuts those numbers in half.

Hard Drives
Up to 8 X 3.5 inch SATA or SAS drives or Up to 8 X 2.5 inch SATA, SAS, or Solid State drives.

No Flex Bay option

Expansion Slots
No compact PCI or riser cards here. 5 PCIe Gen2 slots (Two full-height, full-length x8. Three full-height, half-length x4).

Full specs here.

Up Next: Part II - Configurations & Utilization

Thursday, October 29, 2009

Seriously Dell, a BIOS Upgrade to Install a Second Processor

Most of the time, I am a big fan of Dell. I have used Dell servers for years and they work great and support is usually pretty good. However, I feel like I need to vent a little this morning about how ridiculously difficult it was to add a 2nd processor to our VMWare ESX server that is running on a Dell PowerEdge 2900.

So, after a failed attempt by one of our IT personnel to add the processor a few weekends ago, I was tasked with the job of adding the processor. Here are the details of what I was told happened on the first attempt:

- Processor was added
- Server booted
- RAID array went into degraded state
- VMWare dies loading and goes to maintenance shell
- Call Dell
- Download currently installed VMWare version
- Run re-install/repair on VMWare installation
- Try a bunch of other stuff
- Remove old processor from Slot 1
- Put new processor in Slot 1
- Server boots
- VMWare loads successfully
- RAID array eventually recovers from degraded state
- Dell sends motherboard

Alright, so now I have to install a new motherboard. Sounds exciting (sarcasm). So, I happen to be traveling to the location the next week so I schedule some time to come in in the wee hours of the morning to replace the motherboard. Today was that morning.

So, I get here about 4:30am and realize I don't have a key to the building. Oh, I have a badge that gets me in, but a lot of good that does when the door is physically locked. Anyway, I digress, I was able to get in the building and so I set about preparing for the motherboard replacement.

But, hold on, what is this error on the LED display (E1118 CPU Temp Fail)? Hmm, maybe I should check that out. The first hit on Google is this:

The post by YngwieJ sounds promising (thanks, by the way). I happen to have motherboard BIOS 2.2.6. What a coincidence. So, I download the Red Hat version of the BIOS upgrade, 2.6.1, and perform the upgrade. (Oh yeah, I had to remove the CPU from slot 2 to get the server to boot, because I had tried adding the old CPU to slot 2 since that had not been attempted yet. Long story, based on some other information we found online about processor stepping models.) The upgrade wasn't so hard, and once it finished, I crossed my fingers and booted up the server. The server booted successfully, and VMWare loaded successfully.

Wow. Seriously, Dell? I cannot add a second processor without upgrading the BIOS on my motherboard? Maybe you should add a little checkbox to your testing form that says, ___ Add Second Processor. If you already have this checkbox, is it checked? If so, then you may need to look at other solutions.

Well, I feel a little better now. I am glad I get to ship the motherboard back!

Tuesday, October 27, 2009

Why Cisco? Part II - HP Procurve Alternative to Cisco Switches

I just discussed why I like Fortinet FortiGate firewalls over the Cisco PIX or ASA. Now, I thought I would spend some time discussing HP Procurve switches.

Like I said in my previous post, I don't have anything against Cisco. I still manage a bunch of Cisco routers. I was Cisco certified (CCNA & CCNP, I let them expire). I think Cisco has a great product line and there is no doubt they are the leader when it comes to networking gear.

However, for the last 4 years, I have been managing and purchasing HP Procurve switches and I must say, I like them a lot.

Like the Fortinet FortiGate, HP Procurve switches are easy to configure/manage. For configuration, they have a CLI, a DOS-type menu, and a web interface. I personally like the CLI and menu system, but the web interface is great also.

The HP Procurve line also has some of the cheapest per-port prices on managed switches I have seen. They have great modular switches, and the 5400 series has models with gigabit and PoE capability on every port.

The icing on the cake though is the lifetime warranty. I know it is hard to believe, but it is true. As long as you own the device, HP will send you replacements for hardware failure. We had some 8-10 year old Procurve switches at the City of Provo (we replaced most of them eventually in order to get higher throughput) and any time a module or port went out (not very often) Procurve sent a replacement.

To be fair, Cisco might offer some advanced features in their switch lineup that the Procurve cannot compete with. Also, there is some advanced licensing that you need to purchase on some of the Procurves to unlock the more advanced routing capabilities. However, for the most part, I think the Procurve line is a solid alternative to Cisco switches.

What types of switches do you use? Do you like them? Why or why not? What are some features that Cisco offers in their switch lineup that I should check out?

Why Cisco? Fortinet FortiGate Alternative to Cisco PIX or Cisco ASA

I don't really have anything against Cisco. I think they have a great product line. I even passed the CCNA and CCNP exams back in 2002. However, about 4 years ago when I was working for the City of Provo, we were looking for a new firewall and I was tasked to do the research. It was at this time that I was introduced to the Fortinet FortiGate (all-in-one, multi-threat, unified threat management (UTM), or whatever they are calling these devices now) firewall.

We ended up purchasing two of these devices for the city and set them up in an active-passive distributed cluster. (The firewall is partitioned into two virtual firewalls: one network runs on one unit, and the other runs on the secondary unit. If either device fails, its traffic switches to the other unit.) I have since deployed, and now manage, 5 different sets of clustered FortiGate firewalls for another company, and I feel like I need to share how much I like using these devices. I am not saying they are perfect, but what device is?

Let me just briefly touch on some of the features offered:

Stateful Firewall
Web Filtering
IDS/IPS (Could be easier to manage)
Network Anti-virus
Anti-Spam (One of the areas in which these devices could be improved)
High Availability (Active-Active or Active-Passive Clusters)

For full features and specs, visit

Now, to be fair, there are a few areas in which the product could be improved. As mentioned above, the IDS/IPS functionality could be a little easier to configure and the Anti-Spam could have more options. Also, the logging and reporting is all there, but could be improved. However, even discounting the device for these issues it is still an amazing value.

Coming from a Cisco IOS background, I found it difficult at first to get used to the fact that you can configure 90-95% of the firewall through the web interface (not that you have to - there is a CLI). However, the web interface is great, and it makes managing the firewall, and training IT staff on its use, much easier.

I think that I am most impressed with the High-Availability features. Not necessarily how well high-availability works, though it does work well, as much as how easy it is to cluster the devices. The configuration is straightforward, can be done through the web interface, and connecting the devices is a breeze.

The IPSec VPN is standards-based, and I have successfully connected to Cisco, Checkpoint, and SonicWall VPN devices. The SSL VPN is great and runs in both IE and Firefox. They even have clients that allow you to run the SSL VPN on Linux and Mac OS X.

There are options for authenticating users to determine which web filtering, IDS/IPS, network AV, etc. (called a Protection Profile) gets applied. This authentication can happen seamlessly with an Active Directory extension, or the user can be required to log in to a web form using RADIUS, LDAP, or local authentication.

If you haven't heard of Fortinet before, check them out. I highly recommend the product. Do you use Cisco? If so, what are some reasons I should give the Cisco PIX or ASA another shot? If not, what do you use and how do you like it? I would love to hear from others on this topic.

SQL Server Security Using Active Directory - Windows Authentication

The final post in my series dealing with only assigning rights directly to resources once deals with SQL Server security. My preferred method of assigning rights in SQL Server is very similar to my method for objects. However, database security and NTFS security are quite a bit different so I still feel the need to explain it.

First of all, I create 'Domain Local' groups for all of the server roles as follows (if I remember correctly, your domain needs to be at a certain functional level in order to use 'Domain Local' groups, so if you have any problems creating them, check what functional level you are at):


Now, even though you won't have very many people in any of these roles, I still follow my rule of never adding users to 'Domain Local' groups. I always create 'Global' groups based on job roles.

Next, I create groups for database level roles that will be applied to all databases as follows:


These groups will then be given access to the corresponding database role in each database (e.g. PROVO-SQL1[_Instance].AllDatabases.Read will be assigned the db_datareader role). You will notice that I have added an AllDatabases.Execute group. There is no db_executor role in SQL 2005, but it is easy enough to create one. Here is how I choose to accomplish this:

CREATE ROLE db_executor
GRANT EXECUTE TO db_executor
-- or, to grant at the schema level: GRANT EXECUTE ON SCHEMA::dbo TO db_executor
EXEC sp_addrolemember 'db_executor', 'YourUser'

I must give credit to the following posts for this information.

The last set of groups created are specific to the database (could be specific to the schema if you wanted to break it out further) and are as follows:


We could get a lot more detailed here and add additional roles and schemas, but I think that this is a good enough explanation of the concept and the additional roles/schemas could be accounted for with additional descriptive groups.

The server role groups only need to be added to SQL once. The AllDatabases groups can be added to the model database to take care of any new databases. We use a custom stored procedure we created to add the database groups to any newly created databases.
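Because the AllDatabases groups follow a fixed naming pattern, the per-database grants can be generated rather than typed by hand. Here is a rough sketch of that idea; the HAZARINC domain and the Write/db_datawriter pairing are illustrative assumptions extrapolated from the db_datareader example, and db_executor is the custom role created above:

```python
# Sketch: generate the T-SQL that grants the 'AllDatabases' Domain Local
# groups their database roles. Domain/server names are placeholders, and
# the Write -> db_datawriter mapping is an assumption for illustration.

ROLE_MAP = {
    "Read": "db_datareader",
    "Write": "db_datawriter",
    "Execute": "db_executor",  # the custom role created above
}

def grant_script(domain, server, database):
    lines = [f"USE [{database}];"]
    for suffix, role in ROLE_MAP.items():
        group = f"{domain}\\{server}.AllDatabases.{suffix}"
        lines.append(f"CREATE USER [{group}] FOR LOGIN [{group}];")
        lines.append(f"EXEC sp_addrolemember '{role}', '{group}';")
    return "\n".join(lines)

print(grant_script("HAZARINC", "PROVO-SQL1", "Payroll"))
```

The same logic could be folded into the stored procedure that handles newly created databases.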

Restricted Groups - Click Here
Object Rights Assignment - Click Here

Monday, October 26, 2009

Active Directory Security - Rights Assignment - Permissions for Shares and Folders

In my previous posts, I explained how much it bothers me to have to assign rights to a resource more than once. I already spoke about assigning rights to computers and servers here. Now, I would like to discuss my preferred method of assigning rights to objects (files/folders/shares/etc.).

The first thing I do when I am assigning rights to a folder is create a series of 'Domain Local' groups for each type of permission I think I will ever grant to the object (use 'Domain Local' groups because they can contain 'Global' groups from any domain; also note that your domain needs to be at a certain functional level to use them, so if you have problems creating them, check your functional level). Then, I assign 'Roles' ('Global' groups) to the appropriate 'Domain Local' groups in order to grant access to the object. Here is an example:

For this example, I am setting permissions on the 'Software' folder (which happens to be a share also) on the server PROVO-FS1.

I start by creating at a minimum the following 'Domain Local' groups:

PROVO-FS1.Software.Read
PROVO-FS1.Software.Modify
PROVO-FS1.Software.Full

You can extend this to other more specific permissions if you feel you will use them (e.g. PROVO-FS1.Software.ListFolder). You can also use these same groups to add permissions to the share. Just make sure you assign the appropriate share permission for the groups as there is not a 1-1 relationship between share and NTFS permissions.
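To make the convention concrete, here is a tiny sketch (purely illustrative) that spells out the group names this scheme produces for a given server and resource, including the ListFolder extension mentioned above:

```python
# Sketch of the [Server].[Resource].[Permission] naming convention.
# The permission list below is illustrative; extend it as needed.

PERMISSIONS = ["Read", "Modify", "Full", "ListFolder"]

def resource_groups(server, resource, permissions=PERMISSIONS):
    """Return the 'Domain Local' group names for one resource."""
    return [f"{server}.{resource}.{perm}" for perm in permissions]

for group in resource_groups("PROVO-FS1", "Software"):
    print(group)
```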

Then, you add these rights to the 'Software' Share/Folder. Make sure you review the default rights and remove groups that should not have access. You can remove all groups, but some prefer to leave the local administrator and/or 'Domain Admins' accounts. The main concern here is to make sure you do not, unknowingly, leave ACLs that will grant users higher privileges than they would otherwise gain through these new groups.

Once you are done creating the 'Domain Local' groups, you can add 'Global' groups (Roles) to the 'Domain Local' groups (e.g. you may want to add the 'Global' group 'Information Systems Users' to PROVO-FS1.Software.Modify or add the 'Global' group 'Software Developers' to PROVO-FS1.Software.Read).

Now, here are some rules that must be followed (Ok, they are rules that I made up, but I like to follow them).

Rule #1 - 'Domain Local' groups can never include 'User' objects. You can only assign 'Roles' ('Global' groups) to 'Domain Local' groups. Then, you assign 'Users' to 'Roles' ('Global' groups).

Rule #2 - Do not use the 'Full' (Full Control) group unless you want the group you are assigning to be able to modify security permissions (very few people need this). The 'Modify' group has all of the rights anyone should need to manipulate, rename, delete, etc. Full Control is by far the most misused permission, and it is a very bad security practice to grant it to users other than system administrators or business owners. Even then, if you are using the method of assigning rights explained in this post, you can easily delegate control of the groups to business owners or develop an application that allows them to modify group membership, so you would not need to grant 'Full' to the business owner either.

You may also want to extend the validation of rights further up the tree. For example, you may have the following folders under the software folder:

Approved
Unapproved

In this case, you would create more 'Domain Local' groups for the subfolders as follows:


PROVO-FS1.Software.Approved.Read
PROVO-FS1.Software.Approved.Modify
PROVO-FS1.Software.Approved.Full
PROVO-FS1.Software.Unapproved.Read
PROVO-FS1.Software.Unapproved.Modify
PROVO-FS1.Software.Unapproved.Full

Then, you would disable inheritance on these folders and assign the groups with the appropriate permissions to the appropriate folders.

For subfolders, you can, if needed, add the subfolder's 'Domain Local' group to an appropriate parent folder 'Domain Local' group. In this example, you could add all of these groups to PROVO-FS1.Software.ListFolder. Now, you are ready to assign roles to the 'Domain Local' groups. You could give many roles access to the PROVO-FS1.Software.Approved.Read, but you may only want to add 'Technical Support Users' to PROVO-FS1.Software.Unapproved.Read.

You should never need to adjust permissions at the resource again (Unless you decide you need to add an additional permission set, then you create the group and add it).

Previous: Restricted Groups
Up Next: Active Directory SQL Server Security

Friday, October 23, 2009

Active Directory Group Policy Restricted Groups

This article is not a tutorial on how to create and use 'Restricted' groups, but mainly a commentary on why I use them and also some design concepts that I use in my environment. If you would like more information on how to create 'Restricted' groups, see these tutorials:

Restricted Groups is a section of Group Policy that allows you to set the membership of (add users and groups to) the local groups (e.g. Administrators, Power Users, Remote Desktop Users, etc.) on any resource (computer/server) that the policy applies to.

I use Restricted Groups as part of my quest to eliminate having to change security permissions on resources at the resource. Another benefit of using Restricted Groups is that the settings are reapplied every time Group Policy refreshes. So, if someone manages to escalate their privileges by adding themselves to the Administrators group on the local machine, the next Group Policy refresh will remove them from the group. (You will still need to monitor for this type of activity, as they will keep the elevated privileges for an entire logon session, or indefinitely if they can find a way to stop Group Policy from refreshing.)

If you refer back to my post on Active Directory structure here, you will see that for each location I have an OU called 'Restricted Security'. In this OU are all of the restricted groups that I create for a location. These groups are 'Domain Local' groups; any time I am assigning groups to a resource (file/folder, computer, database, etc.) I use 'Domain Local' groups, because they can contain groups from other domains while 'Global' groups cannot. The following is a sample of the types of groups I have in the 'Restricted Security' OU. I will use the placeholders [Location] and [Department] to show that I have groups for each location, department, and location/department combo.

Under the OU HazarInc Groups > Enterprise > Restricted Security (if you have no clue what I am talking about here, please see my previous post here) you will find the following 'Restricted' groups:

Restricted Admins - Local Administrators on any machine in the domain
Restricted Power Users - Local Power Users on any machine in the domain
Restricted Remote Desktop Users - Local Remote Desktop Users on any machine in the domain

Under the OU HazarInc Groups > [Location] > Restricted Security you will find the following:

Restricted [Location] Admins - Local Administrators on any machine in [Location]
Restricted [Location] Power Users - Local Power Users on any machine in [Location]
Restricted [Location] Remote Desktop Users - Local Remote Desktop Users on any machine in [Location]
Restricted [Location] [Department] Admins - Local Administrators on any machine in [Department] at [Location]
Restricted [Location] [Department] Power Users - Local Power Users on any machine in [Department] at [Location]
Restricted [Location] [Department] Remote Desktop Users - Local Remote Desktop Users on any machine in [Department] at [Location]

Now, here is a rule that must be followed (Ok, it is a rule that I made up, but I like to follow it).

'Domain Local' groups can never include 'User' objects. You can only assign 'Roles' ('Global' groups) to 'Domain Local' groups. Then, you assign 'Users' to 'Roles' ('Global' groups). For example, if users in 'Provo' that are in the 'Human Resource' department need to be 'Power Users', you would add them to the 'Global' group 'Provo Human Resource Users' and then add that group to 'Restricted Provo Human Resource Power Users'. This means they would be 'Power Users' on any machine in 'Provo' that is assigned to the 'Human Resource' department's OU. To filter that further so they can only log on to their own machines, we use a free Active Directory add-on from Microsoft called 'LimitLogin', but that is a topic for another day.

Alright, now you create Group Policy objects at the company/domain level, the location level, and the department level. Please refer to the tutorials above for information on the creation of these policies. Note that the Restricted Groups policy setting is not additive: if a policy at the company/domain level contains groups for the local 'Administrators' group, you will need to add those same groups again at the location level and the department level. Below is an example of how I set the groups at each level for the 'Administrators' group and then the 'Power Users' group.

Company/Domain Level Administrators Group
Administrator - I make sure I assign the default groups and users
[DOMAIN]\Domain Admins - I make sure I assign the default groups and users
[DOMAIN]\Restricted Admins

Location Level Administrators Group
Administrator - I make sure I assign the default groups and users
[DOMAIN]\Domain Admins - I make sure I assign the default groups and users
[DOMAIN]\Restricted Admins
[DOMAIN]\Restricted [Location] Admins

Department Level Administrators Group
Administrator - I make sure I assign the default groups and users
[DOMAIN]\Domain Admins - I make sure I assign the default groups and users
[DOMAIN]\Restricted Admins
[DOMAIN]\Restricted [Location] Admins
[DOMAIN]\Restricted [Location] [Department] Admins

Company/Domain Level Power Users Group
[DOMAIN]\Restricted Power Users

Location Level Power Users Group
[DOMAIN]\Restricted Power Users
[DOMAIN]\Restricted [Location] Power Users

Department Level Power Users Group
[DOMAIN]\Restricted Power Users
[DOMAIN]\Restricted [Location] Power Users
[DOMAIN]\Restricted [Location] [Department] Power Users
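Because the setting is not additive, each deeper level re-lists everything from the levels above it. That layering can be sketched like this (the names are the same placeholders used above; for the 'Administrators' group you would additionally keep the default Administrator and Domain Admins entries at every level):

```python
# Sketch: build the member list for a Restricted Groups entry at the
# company, location, or department level. Non-additive layering means
# each deeper level repeats everything from the levels above it.

def restricted_members(domain, local_group, location=None, department=None):
    members = [f"{domain}\\Restricted {local_group}"]
    if location:
        members.append(f"{domain}\\Restricted {location} {local_group}")
        if department:
            members.append(
                f"{domain}\\Restricted {location} {department} {local_group}"
            )
    return members

# Department-level 'Power Users' entry:
print(restricted_members("HAZARINC", "Power Users", "Provo", "Human Resources"))
```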

Up Next: Active Directory Security - Rights Assignment - Permissions for Shares and Folders
Also: SQL Server Security Using Active Directory

The Method to My Madness Part II - Active Directory Standards & Role-Based Access

Nothing bothers me more than having to assign rights to a resource more than once. In my effort not to be bothered, I have come up with ways to accomplish this for computer rights, file/folder permissions, and SQL Server security. Obviously you have to update rights to a resource over time; what I hate having to do more than once is the act of assigning rights directly to the resource. I would rather be able to add someone to a group that is based on a role or job function and be done. I don't want to have to browse out to folders and files and assign permissions. I don't want to have to make someone a Power User by actually changing the rights on the computer. Lastly, I don't want to give someone rights to a database by going to the database server and assigning rights to his/her user object.

In an effort to achieve these goals, I have established some rules for assigning rights to objects, computers, and databases: I utilize Restricted Groups for computers, a combination of 'Domain Local' and 'Global' security groups for objects, and groups/roles for SQL Server. I will break these up into three separate posts.

1. Restricted Groups - Click Here
2. Object Rights Assignment - Click Here
3. SQL Rights Assignment - Click Here

Wednesday, October 21, 2009

The Method to My Madness - Active Directory Structure

I get a hard time at work because I am very picky about the way Active Directory is structured in our environment. What can I say, I like doing things a certain way. Now don't get me wrong, I am open to doing things differently and changing my ways, but there has to be a very good reason. For this reason, I am publishing my preferred structure for all to see and critique. I hope to improve upon this structure by getting feedback from as many people as possible. I will try (as much as possible) to explain my thought process and reasoning. When there is a comment for an OU, I will give it a number and explain below.

I do not really change any of the default Active Directory containers and OUs. I leave them as is, but I do not add any new objects to them if at all possible (except for the 'Domain Controllers' OU). Here is what the rest of my structure looks like. I will use an asterisk to denote a sublevel, with one asterisk being a root-level OU. For my example, I will use the fictitious name HazarInc with locations in Provo, Utah; Scottsdale, Arizona; and Guayaquil, Ecuador. I use [type] as a placeholder; for example, when you see 'Computers [type]', I am trying to convey that there could be an OU for a specific type of computer (e.g. 'Computers Removable Media Allowed'). The following are all Organizational Units.

        * HazarInc - 1
        *        * Provo - 2
        *        *        * Human Resources - 3
        *        *        *        * Computers
        *        *        *        * Computers [type] - 4
        *        *        *        * Users
        *        *        *        * Users [type]
        *        *        * Information Systems
        *        *        *        * Computers
        *        *        *        * Users
        *        *        * Sales
        *        *        *        * Computers
        *        *        *        * Users
        *        *        * Operations
        *        *        *        * Computers
        *        *        *        * Users
        *        * Scottsdale
        *        *        * Human Resources
        *        *        *        * Computers
        *        *        *        * Computers [type]
        *        *        *        * Users
        *        *        *        * Users [type]
        *        *        * Information Systems
        *        *        *        * Computers
        *        *        *        * Users
        *        *        * Sales
        *        *        *        * Computers
        *        *        *        * Users
        *        *        * Operations
        *        *        *        * Computers
        *        *        *        * Users
        *        * Guayaquil
        *        *        * Human Resources
        *        *        *        * Computers
        *        *        *        * Computers [type]
        *        *        *        * Users
        *        *        *        * Users [type]
        *        *        * Information Systems
        *        *        *        * Computers
        *        *        *        * Users
        *        *        * Sales
        *        *        *        * Computers
        *        *        *        * Users
        *        *        * Operations
        *        *        *        * Computers
        *        *        *        * Users
        * HazarInc Email - 5
        *        * Provo
        *        *        * Contacts
        *        *        *        * [type] - 6
        *        *        * Distribution Groups
        *        *        * Resources
        *        * Scottsdale
        *        *        * Contacts
        *        *        * Distribution Groups
        *        *        * Resources
        *        * Guayaquil
        *        *        * Contacts
        *        *        * Distribution Groups
        *        *        * Resources
        *        * Enterprise - 7
        *        *        * Contacts
        *        *        * Distribution Groups
        *        *        * Resources
        * HazarInc Groups - 8
        *        * Provo
        *        *        * Domain Local Security - 9
        *        *        * Global Security - 10
        *        *        * Universal Security - 11
        *        *        * Group Policy Security - 12
        *        *        * SQL Security - 13
        *        *        * Firewall Security - 14
        *        *        * Restricted Security - 15
        *        * Scottsdale
        *        *        * Domain Local Security
        *        *        * Global Security
        *        *        * Universal Security
        *        *        * Group Policy Security
        *        *        * SQL Security
        *        *        * Firewall Security
        *        *        * Restricted Security
        *        * Guayaquil
        *        *        * Domain Local Security
        *        *        * Global Security
        *        *        * Universal Security
        *        *        * Group Policy Security
        *        *        * SQL Security
        *        *        * Firewall Security
        *        *        * Restricted Security
        *        * Enterprise - 16
        *        *        * Global Security
        *        *        * Universal Security
        *        *        * Group Policy Security
        *        *        * Firewall Security
        *        *        * Restricted Security
        * HazarInc Servers
        *        * Provo - 17
        *        *        * Web - 18
        *        *        * Terminal - 19
        *        *        * SQL - 20
        *        *        * File - 21
        *        * Scottsdale
        *        *        * Web
        *        *        * Terminal
        *        *        * SQL
        *        *        * File
        *        * Guayaquil
        *        *        * Web
        *        *        * Terminal
        *        *        * SQL
        *        *        * File
        * HazarInc Service Accounts - 22
        *        * Provo
        *        *        * Users
        *        *        * Users [type]
        *        * Scottsdale
        *        *        * Users
        *        * Guayaquil
        *        *        * Users
        *        * Enterprise
        *        *        * Users

1 - Any group policy you want to apply to the entire company can be applied here with the exception of policies that are required to be set on the 'Domain Controllers' OU and policies that are required to be set at the 'Domain Level'. I use other policies at the domain level as a catch-all to make sure some policy gets applied to objects that are improperly put in the default folders.

2 - Any group policy you want to apply to the location (e.g. location specific login scripts or location specific update servers)

3 - I put all of my computer and user policies here to cover all OUs that are not a specific [type]

4 - Very specific group policies that override the defaults inherited from above

5 - I like having a separate location for all of my email objects so they don't get mixed in with the rest

6 - If you have a lot of contacts, you may want to use subgroups. This applies for other OUs also

7 - Sometimes these types of objects do not really correspond to a location

8 - I do not like my groups mixed with other objects

9 - Look for information on 'Domain Local' groups and when I use them in an upcoming post

10 - Look for information on 'Global' groups and when I use them in an upcoming post

11 - Look for information on 'Universal' groups and when I use them in an upcoming post

12 - Groups created to filter group policy to specific groups - rarely used as most policies can be separated by OU. Most common use for me is software installation policy

13 - Look for information on 'SQL Security' groups (Domain Local groups) that are used specifically to assign rights to SQL Server in an upcoming post

14 - Groups used in our Fortinet FortiGate firewalls - I will have a post about Fortinet in the near future

15 - Look for information on 'Restricted' groups in an upcoming post - Here's the link

16 - Some groups do not correlate to a location

17 - Apply policies that are required for servers in this location (e.g. update server)

18 - Apply policies specific to Web servers

19 - Apply policies specific to Terminal servers

20 - Apply policies specific to SQL servers

21 - Apply policies specific to File servers

22 - Every security professional's nightmare, the dreaded service account. I like to keep my eye on these so I keep them separate from the other users
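For anyone scripting against a structure like this, each OU path translates directly into a distinguishedName. A quick sketch (the hazarinc.com DNS name is an assumption for illustration):

```python
# Sketch: build a distinguishedName from the OU tree above, leaf first.
# The domain's DNS name (hazarinc.com) is an assumption for this example.

def ou_dn(*path_leaf_to_root, domain="hazarinc.com"):
    ous = ",".join(f"OU={ou}" for ou in path_leaf_to_root)
    dcs = ",".join(f"DC={part}" for part in domain.split("."))
    return f"{ous},{dcs}"

print(ou_dn("Computers", "Human Resources", "Provo", "HazarInc"))
# OU=Computers,OU=Human Resources,OU=Provo,OU=HazarInc,DC=hazarinc,DC=com
```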

Up Next: The Method to My Madness Part II - Active Directory Standards & Role-Based Access

SOLVED: Windows Storage Server + NAS + NFS + VMWare ESX = Problems

This post is for those out there that are pulling their hair out trying to get a Windows Storage Server NAS device to play nice with VMWare. I set up connections to our NAS device a while ago, but had only used the connections sparingly. Everything seemed to be working fine. However, the other day I decided to set up and test Microsoft DPM.

The problem with DPM is that you need a raw disk that it can format and set up as a protection group. I had better get back on topic, because if I don't this entry might end up being a rant about what I don't like about DPM. So, I set up a VM, installed DPM, and created a large vmdk on the NAS datastore to use for my protection group. This is where my troubles began.

First of all, it took me about 5 or 6 tries to even create the vmdk (I guess that was my first warning). Then, when I finally got everything installed, I went to create the protection group and the NFS service locked up on the NAS. All of my datastores went into an 'inaccessible' state, which is awesome for machines with the system drive on the NAS because it shuts them down. The service does not respond to a restart request or any attempts to stop, start, kill processes, etc., so the only option was to reboot the NAS. Yeah, so I went through this process a couple of times before I realized there was no way this was going to work.

Well, after getting discouraged, I turned to the Internet for solutions. Surprisingly, it was not very easy to find solutions to this problem (unless you count "Use Linux" as a valid solution). Finally, I stumbled upon a hotfix issued by Microsoft that seemed to address the issue. I applied the hotfix and it still did not work. However, after re-reading the hotfix, checking all of the registry settings, following a link to another hotfix, installing that hotfix, and fixing those registry entries, I am happy to say everything has been running smoothly for over a week.

Here are the links to the hotfixes:

You will need to install both. Don't be fooled when a hotfix states that some of the registry entries are set for you; I found this not to be the case, so verify every registry entry it references. Also, the hotfixes require a reboot in order for the system to pick up the changes.

One of the hotfixes references network connectivity loss as a reason you may be experiencing the failures so, just in case, check your network equipment for errors and make sure your cabling is ok.

Tuesday, October 20, 2009

Kendal Van Dyke: Things You Need To Know If You Use DFS Replication

Here is a great post that has information on some potential problems with DFS replication. Not that you shouldn't use it, but if you do decide to use it check it out.

Monday, October 19, 2009

DFS - Rename Replication Group

I have seen a lot of posts about renaming replication groups that state that there is no way to rename your replication group. I was able to rename a replication group by using ADSI Edit (adsiedit.msc). Here is the method I used:

1. Open adsiedit.msc on one of your domain controllers.
2. Expand the top-level 'Domain' tree, expand CN=System, then expand CN=DFSR-GlobalSettings.
3. Right-click CN=[replication group name], select 'Rename', and change the name to CN=[new replication group name].
4. Expand CN=[new replication group name] and check to make sure that all child objects are using the new name.

Once the change is made, you may receive some errors while the changes replicate throughout your network, but eventually it will all sync back up. At least it did for me.

P.S. - Please be careful using the ADSI Edit utility as you can corrupt your Active Directory configuration by changing the values incorrectly. I suggest you test the changes in development before deploying to production.

Thin Clients, I Never Thought They Would Work

So, about 6-8 months ago I started looking for a way to convince executive management to replace aging desktop computers in our data entry facility in Tijuana, Mexico. At the same time, we were involved in numerous security audits with our customers and were being pressured to come up with more secure solutions for our data entry facilities.

I had looked at thin clients in the past, but I always thought that they would not provide the performance necessary for high-speed data entry applications. However, after pricing out replacement desktops and factoring in the resources it was taking to maintain the aging hardware in our TJ facility, I was able to convince management to test the thin client option.

The thin client we chose for our test was the HP T5145. We chose this model because it was the cheapest offering (< $200) we could find, so we thought it would be the easiest model to obtain a payback on.

The HP T5145 is a Linux-based thin client that allows RDP and Citrix-type connections. Other than that, the capabilities are pretty limited. We purchased a thin client for testing and were surprised to find that, with some tweaking, it performed fast enough in our data entry application. Now, it is possible that newer desktops may have performed slightly faster, but as we started looking at the security and cost-saving benefits of thin clients, we made the decision to go ahead with them anyway.

Now whenever I read articles about cost savings from thin clients, people always love to point out the energy savings. While there are definitely energy savings to be had, I find this type of cost savings is more marketing and propaganda than anything else. I see the cost savings coming into play more in the replacement cycle of the hardware and a lighter administrative burden. Let me explain.

Most companies refresh their hardware every 3-4 years. There are various reasons for this and I won't go into them here. There are different numbers out there, but because thin clients do not have any moving parts, most people agree they should last anywhere between 6 and 10 years. So, not only do they cost less than half as much as a desktop (possibly more than half once you add terminal server/Citrix licensing), they do not need to be replaced nearly as often. Now, what about administration?
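Here is a rough sketch of that replacement-cycle math for our 64 TJ seats. The $500 desktop price and the 12-year horizon are my own illustrative assumptions; the under-$200 thin client price and the 4- vs 6-year cycles come from the discussion above:

```python
# Rough sketch of the replacement-cycle savings. ASSUMPTIONS: the $500
# desktop price and 12-year horizon are illustrative; the $200 thin
# client price and the 4- vs 6-year lifespans are discussed above.

def hardware_cost(unit_price, lifespan_years, horizon_years=12, seats=64):
    replacements = -(-horizon_years // lifespan_years)  # ceiling division
    return unit_price * replacements * seats

desktop_cost = hardware_cost(500, 4)  # desktops refreshed every 4 years
thin_cost = hardware_cost(200, 6)     # thin clients on a 6-year cycle

print(desktop_cost, thin_cost)  # 96000 25600
```

Terminal server/Citrix licensing would still need to be added to the thin client side to complete the comparison.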

Our current setup supports 64 thin clients at our TJ production facility, and we run Terminal Services for these clients. However, instead of dedicating hardware to these terminal servers, we run them as VMs on the existing VMWare hosts that run our other servers. We have 4 virtual machines (2 on each physical host) along with all of the other file servers, SQL servers, etc. that are needed for our production facility. If you license your virtual host with Datacenter edition processor licenses, it doesn't matter how many virtual machines you create on that physical machine. So, although you could argue it cost us more for the extra resources in the servers that allowed us to run these machines, I find the cost is negligible: processing power and memory are so cheap right now that the main cost is in the storage, and these servers do not use much storage space. Why does all this matter?

Well, now that you understand our setup, here is what I believe is the best part of the thin client solution. Instead of maintaining 64 client computers, we only maintain 4. For example, we used Group Policy to deploy the .NET Framework 3.5 before the update came out through WSUS. Unfortunately, the method Microsoft gave for deploying the product through Group Policy was convoluted, and when they finally released the update through WSUS, it would not install on any of the machines that had received the update through Group Policy. The solution was to use the .NET removal/cleanup tool to remove all versions of .NET and then reinstall the versions you needed. Ah, if only it were that simple. We found that not all machines would reinstall with the same procedure, so it changed from an easily automated task to a manual one (we eventually did find a solution that worked on the majority of machines, but for the sake of argument let's say we didn't). It took on average 1.5 hours per machine to fix the .NET install problem. However, instead of having to perform this procedure on 64 machines, we only had to do it on 4 (or we could have done it on 1 and cloned the machine 3 times to recreate the other terminal servers). I will let you extrapolate from here to other cost savings that may be had with this configuration. Now, what about security?

The thin clients can run in stateless mode, which means they retrieve their configuration from an FTP server based on a user logon. In this mode, the user cannot change any settings on the thin client (the administrator changes the configuration through simple edits to the XML file on the FTP server). So, we combine this with Group Policy to lock down the terminal servers (no USB or other personal storage redirection), a dedicated VLAN for the thin clients that only allows RDP access to the terminal servers, and firewall rules that whitelist Internet access from the terminal servers (no file uploads to untrusted sources), and the setup is pretty secure. Now, I am not naive enough to say it is perfect. There are always exploits you miss or new exploits that present themselves later, but it is so much better than what we had before.

So, we now have 64 thin clients in our TJ production facility (data entry), 10 in our Utah production facility (data entry and miscellaneous use), and 25 in our Pennsylvania production facility (call center). You definitely need to test all necessary user applications over Terminal Services: DirectX does not work over Terminal Services (it may work with Citrix), we cannot use our high-speed scanners with thin clients because they require SCSI cards, and certain applications require a console session. Also, test different resolutions; some of our applications that require faster response times work best at 1024 x 768, which fortunately is the resolution they are programmed to work at. But if you can find a user group that thin clients work for, I highly recommend the solution.

DFS Namespaces are Super Cool

As a follow-up to my last post: another benefit of off-site storage using DFS-R is the ability to use namespaces to give each location quick access to the files it needs from the file server closest to it.

DFS namespaces are virtual paths that use Active Directory's Sites and Services feature to determine which file server is closest to you and connect you to that server without your having to specify the server's name in the path.

For example, say you have a domain with two servers at two different locations participating in DFS replication (FileServerA and FileServerB). You can set up a namespace (FileServer) and then write your login scripts to connect to \\\FileServer\Share; if you are in location A, your computer connects to FileServerA, and if you are in location B, it connects to FileServerB. Say you travel back and forth between the locations. All changes you make to a file at location A replicate back to location B once you close the file, so when you return to that location you have not lost any data. Since DFS-R uses byte-level (remote differential) replication, it does not transfer much data, so the delay between closing the file and replication is minimal.
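The referral behavior can be pictured with a toy sketch. The site names and the server-to-site mapping below are hypothetical, and real DFS referrals are ordered by Active Directory site-link costs rather than a simple same-site check, but the idea is the same: clients are pointed at a target in their own site first.

```python
# Toy model of site-aware DFS namespace referrals: each namespace target
# lives in an AD site, and a client is referred to targets in its own
# site before any others.
TARGETS = {
    r"\\FileServerA\Share": "Site-A",  # hypothetical site assignments
    r"\\FileServerB\Share": "Site-B",
}

def referral_order(client_site):
    """Return namespace targets with same-site servers sorted first."""
    return sorted(TARGETS, key=lambda target: TARGETS[target] != client_site)

print(referral_order("Site-A")[0])  # a client in Site-A is sent to FileServerA
print(referral_order("Site-B")[0])  # a client in Site-B is sent to FileServerB
```

Both clients use the identical path \\domain\FileServer\Share; only the referral differs.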

There is always the chance of a conflict arising when two people modify a file at the same time, and you may lose some changes, but the administrator should be able to look up and resolve these conflicts, and hopefully they will be few and far between. Still, this is one downside to DFS that should be considered. It would be great if functionality were added to handle these situations better without requiring administrator intervention. I am sure Microsoft would tell you to buy SharePoint.

You will need to make sure your sites are set up correctly in Active Directory, but it works great.

Distributed File System Replication (DFS-R) and Volume Shadow Copy (VSS) for Backups

So, we have been doing some testing on a new backup solution at our company, and I wanted to see if anyone had any input. So far the testing has gone well, but I want to make sure we are not missing anything before we implement this across our enterprise.

We wanted off-site backups without having to carry the physical media off-site, but still wanted version control. So, what I decided to do was replicate our file servers with DFS-R and use VSS to provide version control at the primary location.

My thought process is that off-site storage of backups and version control are separate processes that get lumped together only because most backup vendors provide both in the same package. They really fulfill two different requirements: the ability to restore a file to a point in time (version control) and the ability to recover from a major disaster or hardware failure (off-site backup).

I started by setting up DFS-R internally at our primary site, from one of our file servers to a Windows Storage Server 2003 R2 NAS. Then I added a server at an alternate location to the replication group; it replicates from the NAS device during non-peak hours. Both locations have fairly high-speed Internet connections (10 Mbps), so in theory we can replicate just under 66 GB of data overnight (assuming an 8-to-5 work day). That figure ignores DFS-R's compression and byte-level replication, so instead of actual rates falling short of the theoretical rate, we can replicate close to it or significantly more, depending on what types of files are on the server.
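The overnight figure works out as follows, assuming 15 non-peak hours between 5 PM and 8 AM and ignoring protocol overhead; depending on whether you count decimal or binary gigabytes, the raw link capacity lands in the 63-67 GB range, consistent with the "just under 66 GB" estimate above:

```python
# Theoretical overnight transfer capacity of a 10 Mbps link.
link_mbps = 10        # connection speed at each site
offpeak_hours = 15    # 5 PM to 8 AM, per the 8-to-5 work day

bytes_per_sec = link_mbps * 1_000_000 / 8          # 1.25 MB/s
total_bytes = bytes_per_sec * offpeak_hours * 3600

print(total_bytes / 1e9)      # ~67.5 decimal GB
print(total_bytes / 1024**3)  # ~62.9 GiB
```

In practice DFS-R's compression and differential transfers can push the effective amount of replicated data well past this raw capacity.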

So, this takes care of disaster recovery, but it leaves us in a world of hurt if we accidentally delete some files, a file is accidentally overwritten, or a file gets corrupted. This is where VSS comes into play: with Volume Shadow Copy set up, we can recover deleted or changed files. Now, I know people will complain that you could lose your versioning if you lose your server, but for most people the cost benefits of this backup solution should outweigh that negative. I have also come up with a few other ways to guard against this loss. The first only works in a virtual environment: if you are running virtual servers, you can place your VSS volume on a disk located on a NAS or SAN device that does not host your server disks. Another solution is to implement VSS in both locations. The last is to use traditional backup technology to back up your VSS volume (you would not have to take this off-site).

I really like this solution for our backups, and so far it has been very low maintenance. There are some concerns with the stability of VSS; for example, I have read that shadow copies can be wiped out by disk defragmenting. However, I have had many issues with traditional backups too, so for me the risks are outweighed by the benefits. Besides, spending zero money on backup software makes me happy (OK, zero money is stretching the truth; I am still working on a solution for SQL Server and Exchange backups).

Up Next: DFS Namespaces