Friday, July 5, 2013

New Widespread Android Vulnerability

I was very interested to find out how you can modify Android applications without changing the cryptographic signature. I'm feeling pretty happy I bought the Nexus 4, so I don't have to wait for my device manufacturer to create and test a patch.

Saturday, June 1, 2013

SANS SEC504 in Salt Lake City August 21st, 2013

SANS SEC504 - Hacker Techniques, Exploits, and Incident Handling

I will be mentoring this course in Salt Lake City starting August 21st.

Mentor courses allow you to learn SANS course material locally in multi-week class sessions. This allows you more time to learn and master the same material taught at the SANS six-day Conference events.  It also helps avoid travel and time away from work.

Class sizes are small, usually under 12 students, which allows you one-on-one assistance during the sessions.

Full details and discount codes below:

Experience SANS training right here in Salt Lake City with a training course from the SANS Mentor Program - local, live training without the travel.
=====================================================================
Enter Promo Code: DRIVE13 when registering to receive 15% off course tuition.
=====================================================================
Class Starts: August 21, 6:30-8:30PM. Class will meet over 10 Wednesday evenings.
Tuition: $3077 if you register by June 30th using discount code DRIVE13.
Full Schedule and Details at:
Instructor: Mentor David Hazar

In SEC504 you will learn:
- The tactics used by computer attackers.
- The latest attack vectors and how to stop them.
- Proactive and reactive defenses for each stage of an attack.
- Strategies and tools for detecting each type of attack.
- Attacks and defenses for Windows, Unix, switches, routers and other systems.
- Application-level vulnerabilities, attacks, and defenses.
- How to develop an incident handling process and prepare a team for battle.
- Legal issues in incident handling.
- How to recover from computer attacks and restore systems for business.

Wednesday, March 23, 2011

You Might Not Want to Disable Opportunistic Locking If . . .

1. If you programmatically access large flat-file databases hosted on a file share in Windows 2003, you might not want to disable opportunistic locking (OpLocks).
2. OK, I guess I only have one confirmed reason right now.

We use Melissa Data COM objects and databases to provide some of the address validation for our customers. One of our developers was making some changes to our processes and noticed that it was taking over a minute for his program to initialize the Canadian address database hosted on our production server (Windows Server 2003 R2 Enterprise x86). Usually, this process takes between 1 and 3 seconds. So, I started troubleshooting the issue.

First, I ran Process Monitor to look at the registry, file system, and network access and make sure there wasn't anything abnormal with the communication. We found some issues with missing config files, but it turns out that the files are not required.

Second, I tried copying the database files to another server and accessing them there. Once this change was in place, the database initialized in 1-3 seconds as expected. Interesting . . . So, I started evaluating the differences between the servers and found that one was 64-bit and the other was 32-bit. I also noticed that the two virtual servers were on different physical hosts. Anyway, I tried this on a few different servers, including one that was the same version and patch level, on the same physical host, and using the same physical network adapter. Every server I copied the database files to worked flawlessly, and the database initialized in 1-3 seconds.

Third, I ran a Wireshark trace and compared the network traffic going to the production server vs. the network traffic to the other servers that were working as expected. When accessing the production server with the issue, I found that the communication between the hosts included over 44,000 SMB packets, most of them only 512 bytes. When accessing the other servers, the communication included fewer than 900 SMB packets, and the majority of those were 32,768 bytes. Interesting . . . Why was there so much more SMB traffic to the production server?
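Some back-of-the-envelope math shows why the tiny reads hurt so much. The packet counts and sizes are the ones from the traces above; the round-trip time is my own assumed LAN latency, not a measured value:

```python
# Rough comparison of the two SMB traces (counts from the Wireshark capture;
# the round-trip time is an assumed LAN latency, not a measured value).
slow_packets, slow_size = 44_000, 512       # production server: tiny reads
fast_packets, fast_size = 900, 32_768       # healthy servers: 32 KiB reads
rtt_seconds = 0.0002                        # assumed ~0.2 ms LAN round trip

slow_bytes = slow_packets * slow_size       # ~21.5 MiB
fast_bytes = fast_packets * fast_size       # ~28.1 MiB

# The payload moved is the same order of magnitude either way...
print(f"slow trace: {slow_bytes / 2**20:.1f} MiB in {slow_packets} packets")
print(f"fast trace: {fast_bytes / 2**20:.1f} MiB in {fast_packets} packets")

# ...but if each request waits on a round trip, latency dominates:
print(f"slow latency floor: ~{slow_packets * rtt_seconds:.1f} s")
print(f"fast latency floor: ~{fast_packets * rtt_seconds:.2f} s")
```

Even under this optimistic assumption, the 512-byte reads alone put a multi-second floor under the slow path before any server-side work, which is at least consistent with 1-3 seconds ballooning toward a minute.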

Fourth, I decided to test adding a new virtual hard drive to the production server. I made this decision because I was still unsure what the problem was, and since we format the drive on our production server with a smaller block size, I wanted to make sure that this difference was not a factor. It wasn't.

Fifth (OK, so to be completely honest there were a few more steps, but they are not worth mentioning here), I had obviously been searching online for possible solutions, but I had not found anything that I thought would make a difference. However, after much searching, I found that many posts and articles dealing with SMB referenced registry entries under lanmanserver\parameters.

I compared the registry values under the key HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters on a working server with those on the production server. I found a value "EnableOpLocks" set to "0" on the production server that did not exist on the other server. After reading about oplocks and how they affect SMB traffic, I decided to test removing this entry, which would re-enable opportunistic locking on that server. Once this change was made and the production server was rebooted, everything worked as expected and the behavior matched the other servers.
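For what it's worth, the comparison itself is easy to script once you have the values exported. A minimal sketch, with plain dictionaries standing in for the exported lanmanserver\parameters values (the example values besides EnableOpLocks are made up; on a real box you would pull them with reg export or a registry API):

```python
def diff_values(reference, suspect):
    """Report values that differ between two registry key snapshots."""
    keys = set(reference) | set(suspect)
    return {
        k: (reference.get(k, "<missing>"), suspect.get(k, "<missing>"))
        for k in keys
        if reference.get(k) != suspect.get(k)
    }

# Stand-ins for lanmanserver\parameters exports from two servers; only
# EnableOpLocks=0 (oplocks disabled) distinguished the slow server.
working_server = {"Size": 3, "autodisconnect": 15}
production_server = {"Size": 3, "autodisconnect": 15, "EnableOpLocks": 0}

print(diff_values(working_server, production_server))
# -> {'EnableOpLocks': ('<missing>', 0)}
```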

Thursday, December 16, 2010

IISReset and There Go My Changes

Have you ever made changes in IIS 6 and then issued an IIS reset through the command-line or through the GUI and lost all the changes you just made? Oh come on, I can't be the only one. Anyway, apparently IIS 6 caches all of your changes and then writes them to disk automatically at some interval (about 5 minutes according to my testing). So, even if there are websites out there (and there are) that say that the metabase is flushed to disk when you issue a reset through the Internet Information Services Manager (IIS Manager), my testing has proven otherwise. Is it possible that this behavior is specific to my environment? Sure, but I have run the tests on multiple web servers (two different domains and one that was not part of a domain) with the same results.
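The behavior is easy to model: IIS 6 effectively treats the in-memory metabase as a write-behind cache that only gets persisted on a timer. Here is a toy sketch of that pattern in pure Python (no relation to the real IIS internals) showing why an abrupt restart discards recent edits:

```python
# Toy write-behind cache: edits land in memory and only become durable
# on a flush, so restarting before a flush discards them.
class WriteBehindConfig:
    def __init__(self):
        self.memory = {}   # live, in-memory settings (what the GUI shows)
        self.disk = {}     # what has actually been written out

    def set(self, key, value):
        self.memory[key] = value          # change is visible immediately...

    def flush(self):
        self.disk = dict(self.memory)     # ...but durable only after a flush

    def hard_restart(self):
        # An IISReset-style restart reloads whatever made it to disk.
        self.memory = dict(self.disk)

cfg = WriteBehindConfig()
cfg.set("app_pool_timeout", 20)
cfg.hard_restart()                        # no flush first: the edit is gone
print(cfg.memory.get("app_pool_timeout")) # -> None

cfg.set("app_pool_timeout", 20)
cfg.flush()                               # the "Save Configuration to Disk" step
cfg.hard_restart()
print(cfg.memory.get("app_pool_timeout")) # -> 20
```

The manual save described below plays the role of flush() here: it forces the durable copy up to date before the restart throws the in-memory copy away.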

So, what do you do to make sure this doesn't happen? It is actually very easy. Right-click the web server in IIS Manager, select "All Tasks", and select "Save Configuration to Disk" before you click "Restart IIS . . .". Or, if you want to do it from the command line, run "cscript.exe %SYSTEMROOT%\system32\iiscnfg.vbs /save" before you run IISReset.

The other solution would be to restart all of the services without using IISReset. See this KB article for more details.

Wednesday, October 13, 2010

Curse of the GrubUpdate: Upgrading from VMWare ESX 3.5 to vSphere 4 - The Solution

My last post was a diatribe about the horrible support experience that I had with VMWare on this issue. It provided the solution, but I figured I would write a more pointed and detailed explanation.

The errors we were getting when trying to upgrade one of our VMWare ESX 3.5 hosts to VMWare vSphere 4 were as follows:

Error in Host Update Utility:
Grub update failed

Error in vua.log:
grub> find /esx4-upgrade/vmlinuz
Error 15: File not found
info: END grub output
error: grub cannot find root hd number

After many months of working with VMWare on this issue, I still did not have a good explanation of what the grubupdate process was or what might be causing it to fail. I got sick of constantly attempting the upgrade process at the request of VMWare even though there had been no change or very insignificant changes to the system. So, I started to look at the grub files more closely and compare them to servers that upgrade successfully.

The first attempt I made to correct the issue was to re-install ESX 3.5 while maintaining the existing datastores. I did this because I did not have a /var/log partition. I just had a /var partition with a log folder. The reason I thought this might be the problem is that the vSphere 4.0 upgrade always creates a /var/log partition for the ESX 3.5 failover install that you can use to boot 3.5. Anyway, this did not fix the problem.

After some more research, I noticed that all of my other servers that had been successfully upgraded had the following line in the grub.conf:

kernel /vmlinuz-version ro root=/dev/sda2

The server that was failing had the following line:

kernel /vmlinuz-version ro root=/dev/sda7

Well, I noticed that sda2 on the upgraded servers was a primary partition, while sda7 on the failing server was a logical partition inside the extended partition. I hypothesized that vSphere 4 requires your system partition to be on a primary partition. Once again, I re-installed 3.5 (maintaining the existing datastores), making sure that I installed the boot and system partitions as primary partitions, and then the upgrade was successful.
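If the hypothesis is right, a quick sanity check is to pull the root= device out of the grub.conf kernel line and look at the partition number: on an MBR disk, numbers 1-4 are primary partitions (or the extended container) and 5 and up are logical partitions. A small sketch (the grub.conf lines are from this post; the parsing helpers are mine):

```python
import re

def root_partition_number(kernel_line):
    """Extract the partition number from a grub kernel line's root= argument."""
    match = re.search(r"root=/dev/[a-z]+(\d+)", kernel_line)
    return int(match.group(1)) if match else None

def is_primary(partition_number):
    # On an MBR-partitioned disk, partitions 1-4 are primary (or the
    # extended container); 5 and up are logical partitions.
    return 1 <= partition_number <= 4

good = "kernel /vmlinuz-version ro root=/dev/sda2"   # upgraded fine
bad  = "kernel /vmlinuz-version ro root=/dev/sda7"   # grubupdate failed

print(is_primary(root_partition_number(good)))  # -> True
print(is_primary(root_partition_number(bad)))   # -> False
```

Running something like this against your hosts before kicking off the upgrade would at least flag the servers likely to hit the same wall.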

If my hypothesis is true (just because it worked for me does not totally confirm my hypothesis), I cannot believe that this is not documented in the upgrade docs and that tech support was not able to help me find a solution. Anyway, I said enough about that in my previous post.

Curse of the GrubUpdate: Upgrading from VMWare ESX 3.5 to vSphere 4 - The Experience

So, for the last 4 months my team and I have been working with VMWare to find a solution to an error we were receiving upgrading from ESX 3.5 to vSphere 4. Every single time we ran the update, which thanks to VMWare was like 15 times, the upgrade would get to 24% right after the ISO file finished uploading, the status would change to "Running grubupdate . . .", and the installation would fail. This is the error I saw in the logs:

grub> find /esx4-upgrade/vmlinuz

Error 15: File not found
info: END grub output
error: grub cannot find root hd number

You can read about the solution here. Or, you can wade through my diatribe on VMWare support below.

So, after some thorough troubleshooting we submitted a ticket to VMWare. Let me preface this by saying that we have upgraded a bunch of our VMWare ESX 3.5 hosts to vSphere 4.0 without any problems. I really like VMWare's products and have at times received decent support from them. However, the past 4 months I feel like I have been living in the twilight zone.

For the first month, we were asked to try the upgrade again by countless support reps as our request was passed around. I even had one rep call me to ask me for information on the problems I was having upgrading my Windows 2003 Virtual Machine (Seriously, did you even read the ticket?). Anyway, after about three attempts to upgrade without reason, I refused to attempt another upgrade until they offered some type of fix that made sense.

Wait 1 month . . .

Finally they got back to me and said that the BIOS version of our server was not supported (even though they admitted our other server that had successfully upgraded had a much older BIOS version). Anyway, I gave it a shot and it didn't work.

Wait another month . . .

After this I was frustrated so I even tried re-installing 3.5 preserving the existing datastores and the upgrade still failed. Then, VMWare said I had a corrupt partition table. I deleted and re-created datastore partitions and reinstalled so that I had re-created every partition on the server and still no luck.

You may ask at this point why I didn't just blow the machine away and start over. Well, let's just say it wasn't an option. We had some production machines on the server and no space anywhere else to put them. So, in deleting and creating partitions, I was constantly jockeying these virtual machines around.

Anyway, I kept troubleshooting on my own, because VMWare finally came back and said to let them know when I could get my production data off the server so they could fix the partition table, since the fix could destroy all of my data. Finally, I stumbled across what seemed to me like a probable solution.

It was simple actually. My grub.conf file was pointing at an extended partition instead of a primary partition. I was able to free up some space and a primary partition, reinstall ESX 3.5 (preserving the existing VMFS datastores) with the boot and system partitions as primary partitions, and successfully upgrade the host.

So, after one of the worst (sadly not the worst) support experiences of my life, we have finally finished upgrading all of our hosts at this location. I will post a shorter, more detailed solution and link to it here in case people don't want to read my entire rant.

Friday, October 8, 2010

Thin Clients & Terminal Servers - What to look out for or what are the stand-out issues?

I posted an answer on LinkedIn in response to a question and figured it would make an OK post. The question was "Have you ever done a Thin Client Implementation? What are the stand-out issues?".

In our thin client implementation, we used really cheap HP thin clients (~$185) and Microsoft Terminal Services (read about it here). I think thin clients work well if you have a large number of users that use the same applications (at least in a terminal server/Citrix environment). VMWare VDI may support users with more varied requirements, but licensing on that was a little unclear when we did the analysis.

We currently run over 200 data entry personnel on thin clients (they run one application that uses very few resources, so it is an ideal workload for thin clients). We also run another 150 call center agents on thin clients. These users need more resources because they run some web-enabled applications that require more memory and processing power.

I agree with the comments above (I won't steal anyone's thunder, so if you want to see others' answers you can search LinkedIn), but would add that you should disable Windows Error Reporting in any shared Windows environment. This article explains this and has links to configuration documentation. If you ever need it for debugging, you can always re-enable it.

Also, make sure you customize your group policy and login scripts for the terminal servers. You need to trim them down as much as you can because if you have a lot of users logging in at the same time, it can be pretty slow.

Make sure your helpdesk is trained on how to quickly identify possible causes of slowdowns. Many times it is just a program with a memory leak, or one stuck on some process, that is slowing down the entire server. If you can quickly identify the user and have them shut down the offending process, you can avoid too many complaints. Also, be proactive and set up performance logging and alerts to notify you of high utilization on the servers.

Finally, make sure your machines are protected (firewall, antivirus, IDS/IPS, etc.). I have spoken to others that have lost entire citrix/terminal server farms to a virus outbreak. While you get the huge benefit of reduced administrative effort by only having to support a fraction of the machines, you also increase your risk if you lose one or many.

Oh yeah, and there is no DirectX support at all, and no microphone support (client-to-server audio) with Terminal Services without third-party add-ons.