Showing posts with label system administration. Show all posts

Tuesday, January 27, 2015

The Imperfect Lab: Letting Additional Administrators Remotely Connect to Servers

An age-old server administration best practice is to make sure that everyone who administers servers on your network does so with their own "admin" credentials.

Up until this point, I've done all my remote Azure sessions (via PSSession) with the built-in administrator account.  This works fine if you are the only person connecting remotely to a server. But what if you want to grant others administrative rights to your machine and they would also like to connect remotely?

Your first step would likely be to add them to the local administrators group. Since you've already turned on the "remote management" feature for yourself, you might expect this to work out of the box.

But you probably overlooked this little note in the "Configure Remote Management" box when you enabled remote management - "Local Administrator accounts other than the built-in admin may not have rights to manage this computer remotely, even if remote management is enabled."

That would be your hint that some other force is at work here.  Turns out that UAC is configured to filter out everyone except the built-in administrator for remote tasks.

A review of this TechNet information gives a little more detail:

"Local administrator accounts other than the built-in Administrator account may not have rights to manage a server remotely, even if remote management is enabled. The Remote User Account Control (UAC) LocalAccountTokenFilterPolicy registry setting must be configured to allow local accounts of the Administrators group other than the built-in administrator account to remotely manage the server."

To open up UAC to include everyone in your local Admins group for remote access, you'll need to make some registry changes.

Follow these steps to manually edit the registry:

  1. Click Start, type regedit in the Start Search box, and then click regedit.exe in the Programs list.
  2. Locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  3. On the Edit menu, point to New, and then click DWORD Value.
  4. Type LocalAccountTokenFilterPolicy for the name of the DWORD, and then press ENTER.
  5. Right-click LocalAccountTokenFilterPolicy, and then click Modify.
  6. In the Value data box, type 1, and then click OK.
  7. Exit Registry Editor.
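
If you'd rather script the change, the same value can be set with PowerShell. This is just a sketch of the manual steps above; run it elevated on the target server:

```powershell
# Remote UAC filter policy: 1 lets members of the local Administrators
# group use their full token remotely; delete the value (or set it to 0)
# to restore the default filtering behavior.
$path = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
New-ItemProperty -Path $path -Name 'LocalAccountTokenFilterPolicy' `
    -PropertyType DWord -Value 1 -Force
```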

Now you will be able to remotely connect to and administer your server using PowerShell with any account you've given admin rights to on that particular server.  This holds true for servers in Azure, as well as servers on your local network.

Special shout out to Bret Stateham for bringing this "remote admin road-bump" to my attention. Sometimes what looks like an "Azure" problem is really a "Server" feature. :-)

Tuesday, December 16, 2014

The Imperfect Lab: A Few VM Manageability Tweaks

Today in the Imperfect Lab I'm going to work on some cleanup to improve the manageability of my new domain controllers. Since I have two of them, I want to take advantage of Azure's service level agreement.  The only way to ensure that Azure keeps at least one DC running at all times is to create an availability set, which distributes the VMs within a set across different update and fault domains.

Some notes about Availability Sets - VMs must be in the same cloud service and you can have a maximum of 50 in each set. You will find that your machines are spread across 2 fault domains and up to 5 update domains.  Also, avoid creating a set with just one machine in it, because once you create a set you won't get notifications about maintenance regarding those update/fault areas.

Since my machines have already been created, I used the following PowerShell to update them with a set named "ADDC":

Get-AzureVM -ServiceName "imperfectcore" -Name "dc-cloud1" |
    Set-AzureAvailabilitySet -AvailabilitySetName "ADDC" |
    Update-AzureVM

Get-AzureVM -ServiceName "imperfectcore" -Name "dc-cloud3" |
    Set-AzureAvailabilitySet -AvailabilitySetName "ADDC" |
    Update-AzureVM
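
The same thing can be written as a loop if you have more VMs to tag (the names here match the two DCs above):

```powershell
# Assign each DC to the "ADDC" availability set in one pass
'dc-cloud1','dc-cloud3' | ForEach-Object {
    Get-AzureVM -ServiceName 'imperfectcore' -Name $_ |
        Set-AzureAvailabilitySet -AvailabilitySetName 'ADDC' |
        Update-AzureVM
}
```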

If you want a quick gander at all the availability sets that exist in your subscription, run this:

(Get-AzureService).servicename | foreach {Get-AzureVM -ServiceName $_ } | select name,AvailabilitySetName

Since the GUI does hold a fond place in my heart, I want the Server Manager dashboard on one of the VMs to show the status of all the servers in the domain.  You'll notice that if you log into the desktop of one of these newly created servers, "Remote Management" will be disabled.  It needs to be enabled to allow management from other servers, so run "winrm quickconfig -q" against each server to turn that on.  You will have to start a PSSession to each server to do that.
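
Since that command has to run on each box, one way to sketch it is a loop of remote sessions. The VM names come from earlier in this post, and this assumes you can already open a PSSession to each server (for example, with the built-in administrator account):

```powershell
# Run winrm quickconfig on each DC over an existing remoting channel
foreach ($server in 'dc-cloud1','dc-cloud3') {
    $session = New-PSSession -ComputerName $server
    Invoke-Command -Session $session { winrm quickconfig -q }
    Remove-PSSession $session
}
```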

Finally, since I expect to reduce the number of times I log into a machine directly, I'm going to switch one of the DCs to Server Core and the other to MinShell.  These commands take a while to complete and require a restart to finish the configuration, so don't panic if you can't connect to what look like "running" VMs in Azure for a few minutes after the reboot.

For Server Core (from a Machine running the Full GUI):
Remove-WindowsFeature -name User-Interfaces-Infra
Restart-Computer -Force

For MinShell (from a Machine running the Full GUI):
Remove-WindowsFeature -name Server-GUI-Shell
Restart-Computer -Force

With the MinShell installation I will still have access to the nice Server Manager dashboard when I want it and will be able to remotely manage the second domain controller from it.  (The original post included a chart here comparing the features available in each installation option.)




Monday, November 24, 2014

The Quest for the Perfect Lab

There are a few old sysadmin jokes out there... one that often comes to mind for me these days is the one-liner about how the perfect network is one that no one is on.  But now that I have the luxury of being able to build just about any lab network I want (either in Azure or using Hyper-V) I find myself nearly paralyzed by wanting to build the "perfect" network/lab for my needs.

I start, I stop, I get sidetracked by a different project, I come back to my plan, only to realize I've forgotten where I left off (or forgotten where I wrote down that fancy admin password for that VM) and end up tearing it out and starting over again.  The end result is I'm getting nowhere fast.

I've got several MCSE exams in my future that I need to build some things for, to get hands-on experience.  I have a little internal metric of how I need to improve my PowerShell a bit more.  I have work training items that sort of fit into all this, and I keep striving for the perfect lab, the perfect naming system, the perfect password that I won't forget... well, I guess my "perfectionist" side is showing.

It's a slow week here in the office with the Thanksgiving holiday approaching, so now is the perfect time to sit down with pen and paper and really figure out what I'm going to build and what I want to use it for.

Because there is something worse than a network that no one uses.  It's that network I keep deleting.

Tuesday, January 14, 2014

Microsoft Top Support Issues: A Compilation

I know sometimes when I troubleshoot server issues, I feel like my issue is one of a kind. A special server snowflake. Though it probably isn't.

Ever wonder what the most common support issues handled by Microsoft are?

Enjoy perusing the Top Support Solutions Blog! This might just save you some time when you are faced with that next head scratcher. Some of the product lines included (so far) are:
  • Exchange
  • Windows Server
  • Windows 8
  • Lync
  • System Center
  • SharePoint
  • SQL Server
Tips on AD replication. Update lists. DNS optimization. Client activation. STOP Errors. ActiveSync FAQ.  Outlook Anywhere.

There is even a Quick Start for upgrading Domain Controllers in domains with servers older than Server 2008.

This is the repository of tidbits that you are looking for. I started this blog as a place I could collect handy information for myself. I can now sleep peacefully at night knowing there’s something even better.

Go forth and troubleshoot!

Thursday, August 8, 2013

Home Tech Support: What happened to my picture thumbnails?

You know you have to do it.  Your parents, your sister, grandpop... they all ask for help with their computers when you are around.  So here's a quick one I got during my family vacation last week - The thumbnail view of a folder my father kept pictures in wasn't showing the photos anymore.

Somehow the setting for "Always show icons, never thumbnails" was selected in the Folder Options settings, under the View tab. 


I'm guessing an application changed it, though I wouldn't be totally surprised to find out he was mucking around in there.

Another reason thumbnails might not show up is if the hard drive is mostly full.  Windows will stop generating thumbnails to save space.  That was the first thing I checked, but it wasn't the issue in this case.
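
For the record, that Folder Options checkbox is backed by a per-user registry value, so it can also be flipped with a quick script. A sketch (IconsOnly is the value behind "Always show icons, never thumbnails"; 0 re-enables thumbnails):

```powershell
# 0 = show thumbnails, 1 = always show icons (per user)
$path = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced'
Set-ItemProperty -Path $path -Name 'IconsOnly' -Value 0
# Restart Explorer so the setting takes effect
Stop-Process -Name explorer -Force
```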



Wednesday, July 24, 2013

Sysadmin Appreciation Day Coming!

Don't forget that Sysadmin Appreciation Day is on Friday, July 26th!  If you are a sysadmin, this might be a great time to take advantage of a floating holiday you've been saving. 

If you AREN'T a sysadmin you might want to take a look around for your nearest helpdesk support person, the guy who last fixed your misbehaving keyboard, the person who helped you solve that problem with your voicemail, the gal who configured your mobile phone with corporate email, or the guy who restored that file you were looking for from last week's backup.

If you need ideas for how to celebrate or appreciate your neighborhood sysadmin, visit http://sysadminday.com/.

Friday, June 14, 2013

GPT, UEFI, MBR, oh my!

One of my first tasks in my new role is to get started building out my demo laptop. I was issued a nice workstation-grade Lenovo W530. It came preinstalled with the standard Microsoft Windows 8 Enterprise image. As my demo machine, I want a base OS of Server 2012 instead, so I set out to wipe the machine and reinstall.

Since the preinstalled OS was Windows 8, the BIOS was configured for Secure Boot from UEFI Only devices. In addition, UEFI is required if you want to use GPT style disks instead of the legacy MBR style disks. So this Lenovo came out of the box configured with every modern bell and whistle.

First things first, I needed the Lenovo to boot from USB. To add that support, I jumped into the BIOS and went to the Boot menu under Startup.  It shows the list of boot devices, but it's necessary to scroll down some to find the excluded items and add the appropriate USB HDD back in.

The next important decision is whether to install Windows Server 2012 on the GPT disk or use DISKPART to reconfigure it back to MBR. (The DISKPART commands to convert from GPT to MBR and vice-versa are readily available using your search engine of choice.) GPT supports larger disk sizes, but the solid-state disk in this machine isn't that large, so I could go either way. However, you need to know which you are doing because it determines how you set up your bootable USB and your BIOS.

Converting your disk between MBR and GPT will wipe all your data, so make sure starting with a clean slate is REALLY what you want to do.  Also, while my goal is to install Server 2012, these settings and instructions would also apply if you are trying to install a different version of Windows 8.
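
For reference, the DISKPART conversion mentioned above goes roughly like this. Note that "disk 0" is an assumption and "clean" erases the selected disk, so verify the disk number with "list disk" before running it:

```
diskpart
list disk
select disk 0
clean
convert mbr
rem (use "convert gpt" to go the other direction)
```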

For Lenovo, the BIOS settings need to go like this for GPT:
  • Secure Boot - Off
  • UEFI/Legacy Boot - UEFI Only
Also, your USB media NEEDS to be formatted FAT32. (This limits the size of a single file on the USB to 4GB, so watch the size of your image.wim file if you customize it.)

And like this for MBR:
  • Secure Boot - Off
  • UEFI/Legacy Boot - Both (with Legacy First)
Your USB media can be formatted NTFS; FAT32 isn't a requirement.

Take note: if you boot from NTFS media and try to install the OS on a GPT disk, you won't be able to select a partition to install to; you'll be warned that you can't install to a GPT disk and will have to cancel out of the installer.  Even if you are doing everything correctly from FAT32 media, you'll get a warning that the BIOS might not have the drivers to load the OS. This warning is safe to ignore; you can still continue through the install process and setup will create all the necessary partitions to support GPT.

Once all my pre-reqs were sorted out, I rebooted the machine and the Server 2012 install files started to load.  After I clicked INSTALL to get things going, I received an error message that read:

The product key entered does not match any of the Windows images available for installation. Enter a different product key.

Well, huh? Granted, it's been a while since I've attempted to install a server OS on a laptop, but I surely didn't miss a place to enter a product key! After some research I found this KB article, which details the logic for locating product keys when installing Windows 8 and Windows Server 2012:

1. Answer file (unattended file, EI.cfg, or PID.txt)
2. OA 3.0 product key in the BIOS/firmware
3. Product key entry screen

Turns out the Lenovo has a preinstalled OEM license for Windows 8 Pro in the firmware. This saves OEMs from having to put stickers with software keys on the bottoms of machines and ensures that the OEM license stays with the machine it was sold with. Enterprises that deploy images under another licensing model are usually using some kind of deployment tool and an image with an answer file, allowing them to bypass the check against the firmware key.

For my scenario, I wanted the quickest, easiest way to provide my key. Turns out the PID.txt file is a no-brainer. You can reference this (http://technet.microsoft.com/en-us/library/hh824952.aspx) for all the details, but all you need to do is create a text file called PID.txt with these two lines:

[PID]
Value=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX


Put your product key in for the value and save the file to the \Sources folder of your install media. From there it was smooth sailing. After your OS is installed, feel free to turn Secure Boot back on in the BIOS.
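
If you build your install media with a script, creating the file is trivial. A sketch in PowerShell (E: is a hypothetical drive letter for the USB stick, and the key shown is a placeholder):

```powershell
# Write PID.txt into the Sources folder of the install media
$pidText = "[PID]`r`nValue=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
Set-Content -Path 'E:\Sources\PID.txt' -Value $pidText
```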

Monday, May 6, 2013

Your Tier 1 Support is in the Wrong Place

Lots of us started there. Depending on the size of the company you work for, you might still be doing some of it.  Classic Tier 1 support calls are often things like password changes, mouse and keyboard issues, and other things resolved by having the end user reboot their machine or log out and back in.

And I'm almost certain that you have the wrong people handling that job, particularly if that person is you or someone on your team who is also responsible for other, more technical projects. Stick with me on this for a minute.

I've always been a big advocate of the administration departments and the IT departments working closely together and I think that any administrative or executive assistant worth their salt can handle most Tier 1 Helpdesk tickets. Here's why: they already have their hand on the pulse of pretty major areas of your company and often work directly with executives and managers.

They know what guests and visitors are coming to your location - relevant IT tasks include providing WiFi passwords to guests, explaining how to use the phones and A/V systems, and alerting IT ahead of time to guests that need additional resources.

They know when Execs are grumbling about IT issues that can become emergencies (noisy hard drives, problems with applications) and can let IT know ahead of time of pending maintenance issues. 

They can easily be cc'd on emails regarding upcoming password expiration for key executives or managers and make sure those people complete those tasks in a timely manner. Resetting passwords and unlocking accounts is an easy activity that can be delegated to admin staff with a quick training session. With the proper permissions, you can give them only the abilities they need and nothing more.
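
As a sketch of what that delegation might look like in Active Directory (the OU path and group name here are made up for illustration), dsacls can grant just the reset-password right on user objects in an OU:

```
dsacls "OU=Staff,DC=contoso,DC=com" /I:S /G "CONTOSO\FrontDesk:CA;Reset Password;user"
```

The Delegation of Control wizard in Active Directory Users and Computers accomplishes the same thing without the command line.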

Opening tickets, resetting voicemail passwords for phone systems, replacing batteries in wireless mice, swapping out broken keyboards, changing printer toners, basic troubleshooting of printer jams, updating job titles in Active Directory... That's just off the top of my head.

So what good could come of this? First off, there is a big lack of women in systems admin roles. I was just on a WiT panel last week discussing how to get more women into this role. Turns out, 3 out of 4 women on the panel started in administrative roles. It's a great way for someone to get a glimpse into the "plumbing" of how systems and network administration keep businesses running. 

Second, most executive assistants are great managers of time and of people, and can often see and understand the big picture of how a company runs, all characteristics that make successful sysadmins. Letting them handle some of the front facing issues can also take away some of the "mystery" of the IT department.

Integrating these two functions can provide a great cost savings to your company, can provide a pipeline of future staff to pull from when you have an opening in the IT department and as a bonus, you're doing something to help more women begin their technical careers.

So go ahead, steal the receptionist.

Thursday, October 20, 2011

Playing IT Fast and Loose

It's been a long time since I've been at work from dusk 'til dawn. I'm not saying that I'm the reason we have such fabulous uptime; there are a lot of factors that play into it. We've got a well-rounded NetOps team, we try to buy decent hardware, we work to keep everything backed up and we don't screw with things when they are working. And we've been lucky for a long time.

It also helps that our business model doesn't require selling things to the public or answering to many external "customers".  Which puts us in the interesting position where it's almost okay if we are down for a day or two, as long as we can get things back pretty close to where they were before they went down. That also sets us up to make some very interesting decisions come budget time. They aren't necessarily "wrong", but they can end up being awkward at times.

For example, we've been working over the last two years to virtualize our infrastructure. This makes lots of sense for us - our office space requirements are shrinking and our servers aren't heavily utilized individually, yet we tend to need lots of individual servers due to our line of business. When our virtualization project finally got rolling, we opted to use a small array of SAN devices from LeftHand (now HP).  We've always used Compaq/HP equipment and we've been very happy with the dependability of the physical hardware.  Hard drives are considered consumables and we do expect failures of those from time to time, but whole systems really biting the dust?  Not so much.

Because of all the factors I've mentioned, we made the decision to NOT mirror our SAN array. Or do any network RAID.  (That's right, you can pause for a moment while the IT gods strike me down.)  We opted to use all the space we could for data and weighed that against the odds of a failure that would destroy the data on a SAN, rendering the entire RAID 0 array useless.

Early this week, we came really close. We had a motherboard fail on one of the SANs, taking down our entire VM infrastructure. This included everything except the VoIP phone system and two major applications that have not yet been virtualized. We were down for about 18 hours total, which included one business day.

Granted, we spent the majority of our downtime waiting for parts from HP and planning for the ultimate worst - restoring everything from backup. While we may think highly of HP hardware overall, we don't think very highly of their 4-hour response windows on Sunday nights.  Ultimately, over 99% of the data on the SAN survived the hardware failure and the VMs popped back into action as soon as the SAN came back online. We only had to restore one non-production server from backup after the motherboard replacement.

Today, our upper management complimented us on how we handled the issue and was pleased with how quickly we got everything working again.

Do I recommend not having redundancy on your critical systems? Nope.

But if your company management fully understands and agrees to the risks related to certain budgeting decisions, then as an IT Pro your job is simply to do the best you can with what you have and clearly define the potential results of certain failure scenarios.

Still, I'm thinking it might be a good time to hit Vegas, because Lady Luck was certainly on our side.

Wednesday, August 31, 2011

Remote Assistance in Windows 7

Today I had a random reason to use the built-in Remote Assistance features of Windows 7.  I was helping troubleshoot an issue with a vendor on a user's machine, using the user's session.  Here are some things I noticed about the Remote Assistance that differs from a regular Remote Desktop session.

  • Remote Assistance gives you a view of all the user's screens at full resolution.  In this case the end user had 3 monitors, so I had to expand my view of that machine across the majority of my 3 monitors for it to be usable.  Normally when you do a simple Remote Desktop session, all the applications and desktop icons from multiple monitors are fitted to one screen.  This may or may not annoy you, depending on how you like to work with remote systems.
  • Remote Assistance really assumes you have a person sitting at the computer.  As the remote support person, it's very easy to accidentally lose your rights to control the remote desktop by hitting Escape or Ctrl-Escape, and you need the end user to re-authorize your request for control. (My end user used this troubleshooting time as an excuse to get coffee, so I had to run back to the desk to authorize that a few times.)
  • Remote Assistance blocks your ability to send email using the user's email application, in this case Outlook 2007. While I can see how this is good from a security standpoint, it was a hurdle when I wanted to use the email account to send some log files to the vendor.
The Remote Assistance features can certainly be handy depending on what a remote support person needs to be able to do on a user's workstation.  I'll probably use it again, but only when I've got someone sitting there to help with any control issues, since the whole point of using it is to save me from having to leave my desk!

Thursday, August 25, 2011

Tackling Windows 2003 Server Space Issues

Got a Windows 2003 server with a small hard drive that keeps filling up? Make sure to check out these potential space hogs:
  1. The Framework.log file in the %systemroot%\system32\wbem\logs folder. This file has the potential to grow out of control, but that problem can be easily remedied with a quick permissions change. Check out KB836605 for details.
  2. Some auditing and logging applications might be making backups of your Event Logs, which often end up in your %systemroot%\system32\config folder. Check for .EVT files you no longer need so you can move or delete them.

Finally, not sure what's taking up the most space? Check out the free tool WinDirStat for a quick visual map of what's consuming your drive.
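
If PowerShell happens to be available on or against the box, here's a quick sketch for listing the ten largest files on the system drive before reaching for a GUI tool:

```powershell
# Find the ten largest files on C: (errors on locked paths are ignored)
Get-ChildItem -Path C:\ -Recurse -ErrorAction SilentlyContinue |
    Where-Object { -not $_.PSIsContainer } |
    Sort-Object -Property Length -Descending |
    Select-Object -First 10 -Property FullName,
        @{Name='SizeMB'; Expression={[math]::Round($_.Length / 1MB, 1)}}
```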

Monday, June 27, 2011

IT Pros and Plastic: Being a Better Steward for the Environment

Last week for some of my “pleasure” reading, I read “Plastic: A Toxic Love Story”, by Susan Freinkel.  It was a pretty enlightening read and you might be wondering how this topic applies to you as an IT Professional.  I know we spend a lot of time dealing with intangible things in IT.  Virtual machines, the “Cloud”, bits and bytes and software and the physical things always seemed very metal-centric – we even talk about installing things from “bare metal”.

But if you stop and look around for just a moment – it’s probably more plastic than anything else.  Where are you reading this post from?  Your desk?  Your keyboard and monitor are plastic, your desk is probably even mostly plastic.  Your laptop is mostly plastic, or if you are using an e-reader it’s plastic too.  Just about any mobile device is in a plastic case these days.  You might be surrounded by CDs/DVDs and their cases – plastic.  Network cables – coated in plastic. Those swag items you have from that last conference – probably 99% plastic.

As IT Professionals, we rule a world of plastic.  And we need to be better stewards of the plastic that is in our control.  It’s so easy to see many of those plastic items as “throw away” – they’ve been designed that way.  Cheap swag pens, demo CDs, mobile devices replaced annually with the newest model, the list is pretty endless once you start looking around.  But really, plastic is for all practical purposes, forever.

So where to begin?  First, take advantage of e-waste recycling programs in your area. Make sure that the electronic items that are no longer in use in your office have the best opportunity to be repurposed.  Second, consider your inventories of tech-related “consumables” – make sure you are only buying what you need, so that items with a shorter shelf life don’t go into the trash unused.  Printer cartridges and smaller-capacity storage media are things that come to mind.

Third, think about what you are buying for yourself and your family when it comes to popular consumer items.  I’m not saying you should deny yourself a new iPod or a better smart phone.  But think about options for your older devices before they languish in the back of your closet – many organizations take working cell phones to be given to abuse victims, and while you might not want last year’s iPod, someone shopping at Goodwill or some other thrift store might.

As I finished up my reading on my first generation Kindle, I realized that even though some of the newer models are sleeker and faster, what I have is probably good for now.

Friday, April 29, 2011

Adventures with at&t

Here's a story about how a company can have horrible customer service, yet have some wonderful customer service employees all at the same time.  It started over 2 years ago when an at&t representative showed up at our office to review our accounts and help us with our contracts.  Now, I'm no at&t contract expert.  That's why you have an account rep who does these things for you.  Seriously, telecom contracts are worse than Microsoft licensing.

Anyway, over 2 years ago, it was suggested that we have an ABN account set up so we can get the most discounts, etc, based on our usage.  As I understood it, this ABN was like an umbrella account over all our other accounts (PRI, Long Distance, Internet) and we got credit for how much we spend or use.  There's a penalty charge if you don't use the amount of service you agree on in the contract.  We sign all the necessary paperwork and the representative heads off to get all these goodies set up.  We do our job by continuing to pay our at&t bills as usual.

A year later, I get a mysterious bill for $15,000.  A phone call brings to light that we didn't meet our "commitment" with the ABN contract, thus the penalty.  I thought this was odd and more digging brought to light that our pre-existing accounts were never brought under that ABN account we signed up for the year before.

I called our representative and found out they were no longer assigned to us.  A new representative, "Daniel", showed up to our office, reviewed everything and promised to resolve the issue, since it clearly wasn't our fault the accounts weren't put under this umbrella.  We were told not to pay the bill and we'd get credited as soon as it was sorted out.  That was almost a year ago.  Every few weeks, I attempt to follow up, only to be told "it's being worked on."  I've been trusting in at&t to resolve this. 

Moving on, last September we upgraded our Internet service, cancelling our old Frame Relay connection and putting in some nice fresh fiber.  Little did I know, this new account was properly linked to the ABN account.  An account that had a $15,000+ unpaid balance attached to it.  (Can you see where this is going?)

I still haven't heard anything definitive about our billing dispute and haven't had a real interaction with our "official" account representative, Daniel, in a long while.  All my contact was with a technical consultant, "Beth", that was working with my rep, but I digress.

Then in early March, our Internet connection mysteriously dies - at&t cut our service due to the non-payment of the ABN account.  Now, mind you, the account for the Internet service specifically has been paid for every month.  A few calls later to Beth and our Internet was back up.  Beth tells me not to worry, she'll contact billing and we'll get this resolved.  It won't happen again.

Then yesterday, it happens again.  I called Beth and got voice mail.  I left a message.  I called Daniel, got voice mail and left a message.  I called Daniel's boss and got voice mail.  Left a message.  I called the 800 number for at&t customer service and got "Patrick".  Patrick rocked.  He pulled up my account, looked at the ridiculous number of notes on it, muttered something under his breath about how crazy it was that I still had a ticket from June of 2010 and went to find a manager.  About a half hour later, I got a call from "Laverne", who managed to sort enough of it out to get our Internet turned back on. Laverne also rocks.

She couldn't fix the whole billing issue, but told me that it really needed to be handled by our account team.
I told her I knew that.  And that I've left several messages.  Clearly the phone company loves their voice mail features.

I tweeted about this fine event yesterday. I got a response (and a nice phone call) from "Troy" on at&t's team, who monitors people venting about at&t on social media venues.  Troy also told me that he'd work on it and I'd have some more information by Monday.  Troy also appears to rock, but that remains to be seen.

So while I appreciate some of the great service and response I get from some at&t employees, I'm overall really annoyed with at&t in general.  They have too many departments doing too many different things and no one appears to read any notes before they go throwing switches.

I guess I'll go leave a few more voice mail messages now.

Friday, April 1, 2011

All Tied Up with Cables!

This month, one of our data center projects was to clean up the mess of cabling that had gotten out of hand after years of adds, moves and changes to switches and other equipment.  I find it interesting that with so many wireless devices around and so much talk of using virtualization and the cloud, we still spend so much time tangled in cords and cables!  Cable management can often be a challenge and this had become downright embarrassing.  Here is a before picture:


We took on a pretty extensive list of tasks as part of this clean-up, including replacing several older networking components with a single new Cisco ASA.  While it's usually not recommended to make several logical and physical changes at the same time (so you can avoid troubleshooting nightmares later), we were taking advantage of a planned power outage and wanted to accomplish as much as we could while we had everything turned off - including rebalancing all our servers on our power circuits, updating our UPS firmware and recabling every server and workstation port in the data center.

Here is a shot of the same racks after the project was nearly complete.  It's like night and day!

Everything is labeled and color-coded for ease of use.  And we were lucky that all of our servers, appliances and services powered on and returned to service without much trouble.  This project also forced me to update several out-of-date diagrams and charts used for managing the network.

While it was a crazy weekend with our own version of a "spaghetti western", the end result was well worth it!

Friday, February 18, 2011

Check out the Malware Response Guide

Microsoft recently published the new Malware Response Guide, officially known as the Infrastructure Planning and Design Guide for Malware Response.

I reviewed this guide in its beta stages a few months ago and it was a great read and a very useful guide.  If you have limited "official" procedures in place for handling infections on workstations, this is a great way to start that discussion with team members and use some of the tools mentioned to develop a plan that is specific to your organization.

I think the structure is well thought out and very logical. One can easily switch to the course of action that fits the needs of the user and the organization, as well as follow the instructions for preparing an offline scanning kit. I also appreciate the recommendations for additional reading so that I can go more in depth for the products I'm using.

While this guide likely won't change my organization's use of a third-party solution at this time, it greatly complements it by providing other tools from Microsoft that can support my existing tools, or give me an alternate set of tools if my vendor isn't as quick to produce a particular solution for new malware.

I think this guide shows that Microsoft is willing to support systems in all types of scenarios and the information is not written to exclude organizations who aren't committed to only Microsoft software. It provides great processes and talking points to bring any organization closer to having a more cohesive malware response plan.  Take a moment to download it and check it out.

Friday, December 17, 2010

Google Calendar and the “Unsupported” Browser

A couple weeks ago, I started experiencing a curious problem with Google Calendar on my netbook.  I’m running IE 8 (8.0.7601.16562 to be exact) and every time I loaded up my calendar I got a message alerting me about using an unsupported browser.

“Sorry, you are trying to use Google Calendar with a browser that isn’t currently supported…”

Since I’m also using IE8 at work (version 8.0.7600.16385) without any calendar issues, I did what many sysadmins do when stuff doesn’t work on their own computers – I ignored it for a while, hoping it would just resolve itself.

However, today I did a little looking around and found the issue, which ironically is caused by the Google ChromeFrame Add-In.  I turned that off and the calendar now loads without any error messages.  The version of the add-in I had installed was ChromeFrame 8.0.552.224.

Monday, December 6, 2010

Take Aways from the Data Connectors Tech-Security Conference

Last week, I attended a free one-day conference hosted by Data Connectors.  Sometimes free conferences aren't worth the time it takes to get there, but I was really happy with this one.  While all the presentations were vendor sponsored, the majority were product neutral and really shared some decent content.  In addition to the vendor presentations, there was a decent sized expo area with other security vendors to peruse.

Here are some of the stats and tidbits I left with. Some of the themes overlapped throughout the presentations, so I'm not going to attribute each bullet point to a specific presenter.  However, the presentations were sponsored by the following companies: WatchGuard, Axway, Sourcefire, Top Layer Security, JCS & Associates, Kaspersky Lab, Cyber-Ark, FaceTime and Arora / McAfee.  You can learn more about the presentation specifics and download some of the slide decks here on the event agenda page.

End Users
  • End users in the workplace expect to have access to the web and popular web applications; however, 25% of companies need to update their policies related to web use. Instead of addressing the policy issues, companies simply block access to web applications entirely.
  • End users need more education about threats like email scams, pop-ups offering anti-virus solutions, links sent via social media sites, tiny URLs, etc. End users are your biggest threat - often due to error or accidents.
  • The average employee spends 3 hours a day doing non-work items on their computer.
General Company Security and Policies
  • Consider reviewing and improving on your file transfer management practices. How do people share data within your organization and externally? Is it secure and managed?
  • Most companies feel secure, but aren't really. Check out http://www.idtheftcenter.org/ for a list of companies that have experienced data breaches. Many companies simply rely on their vendors to declare that they are secure and protected.
  • Consider using different vendors to protect your data at different levels. Different vendors use different mechanisms to detect and deter threats.
  • As an administrator, you have to review logs on computers, firewalls, servers, etc. This way you are familiar with what is "normal" and can easily recognize potential breaches.
  • Consider data encryption as a means to enable your company to meet regulation compliance. Encryption technology has evolved and it doesn't have to be as painful as it has been in the past.
  • You should patch all your computers regularly - don't forget that your printers, routers and switches are computers too.
Browsers and the Internet
  • The top Internet search terms that are likely to lead you to a site with malware on it are "screensavers" (51.9% chance of an exploit), "lyrics" (26.3%) and "free" (21.3%).
  • In 2009, the Firefox browser had the greatest number of patches and overall, vulnerabilities in applications exceeded operating system vulnerabilities.
  • The web browser is the #1 used application, but the patch cycle for browser add-ins is slower than for other applications and operating systems.
  • Drive-by downloads are still the #1 way to exploit computers.
Sometimes I leave conferences scared by the massive list of items that I feel I need to address, however, I left this conference with not only some tasks in mind, but some great leads on how to go about completing those projects.  Check out the Data Connectors events list to see if there is a similar conference coming up in your area in 2011.  They have well over two dozen other planned dates across the US, including Los Angeles in January and San Jose in February.
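One of the bullet points above - reviewing logs regularly so you know what "normal" looks like - lends itself to a little scripting. Here's a minimal Python sketch (the log lines and threshold are made up for illustration) that masks out the numbers in each log message and flags lines whose "shape" rarely appears:

```python
import re
from collections import Counter

def shape_of(line):
    # Mask decimal and hex numbers so lines that differ only in
    # IPs, ports or IDs collapse into the same "shape".
    return re.sub(r"\b(?:\d+|0x[0-9a-f]+)\b", "#", line.lower())

def baseline(lines):
    """Count how often each log shape appears."""
    return Counter(shape_of(line) for line in lines)

def unusual(lines, shapes, threshold=2):
    """Return lines whose shape appears fewer than `threshold` times -
    candidates for a closer look."""
    return [line for line in lines if shapes[shape_of(line)] < threshold]

logs = [
    "accepted connection from 10.0.0.12 port 445",
    "accepted connection from 10.0.0.17 port 445",
    "accepted connection from 10.0.0.22 port 445",
    "admin login failure from 203.0.113.9",
]
shapes = baseline(logs)
print(unusual(logs, shapes))  # only the odd login-failure line stands out
```

Real log review tools do far more, of course, but even something this simple makes the point: once you have a baseline, the outliers are easy to surface.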

Wednesday, November 17, 2010

OfficeScan 10.5 - Installed, with some Oddities

I finally upgraded our office antivirus software to the latest and greatest version from TrendMicro.  This has been on my list since springtime, and well, you know how those things go.  Because the server that was hosting our existing version is aging rapidly, I opted to install the new version on a new, virtual installation of Windows Server 2008.

The installation went smoothly and lined up well with the installation guide instructions.  Once that was running, I was easily able to move workstations and servers to the new service using the console from the OfficeScan 8 installation.   Our OfficeScan 8 deployment had the built-in firewall feature enabled, which I opted to disable for OfficeScan 10.  Because of this, the client machines were briefly disconnected from the network during the reconfiguration, which led me to wait until after hours to move any of our protected servers to avoid losing connectivity during the work day.

Keep in mind that OfficeScan 10.5 does not support any legacy versions of Windows, so a Windows 2000 Server that is still being used here had to retain its OfficeScan 8 installation, which I configured for "roaming" via some registry changes.  This allows it to get updates from the Internet instead of the local OfficeScan 8 server.  Once that was done, I was able to stop the OfficeScan 8 service.

Some other little quirky things:
  • You can't use the remote install (push) feature from the server console on computers running any type of Home Edition of Windows.  I also had a problem installing on a Windows 7 machine, so I opted for doing the web-based manual installation. Check out this esupport document from Trend that explains the reason - Remote install on Windows 7 fails even with Admin Account.
  • I wanted to run the Vulnerability Scanner to search my network via IP address range for any unprotected computers.  However, the documentation stated that scanning by range only supports a class B address range, while my office is using a class C range.  I couldn't believe that could actually be true, but after letting the scanner run a bit with my range specified and no results, I guess it is.
Overall, it was a relatively quick and painless process.  I wish there had been some improvements to the web management console, like the ability to create customized views.  The "grouping" feature seems a bit limited as well.

Next, I'll probably see that the client installation gets packaged up as an MSI, so we can have that set to automatically deploy using group policy.

Friday, October 22, 2010

DNS Transitioning within AT&T

It took several months of emails, phone calls and coordination, but I finally managed to get our office Internet connection switched from the "legacy" (aka "PacBell") frame relay to the newer AT&T fiber optic network.  This also included an upgrade in our connection speed, which is always a win.  Our IP address ranges were ported from the legacy account to the new service, so we had very little downtime during the cut over - it was a fantastic migration experience.

 After letting our new service settle in for a few weeks and since email responses from AT&T reps are often spotty or non-existent, I called up the customer service number to request that the legacy account be cancelled so we are no longer billed.  The representative I spoke to happily emailed me a "Letter of Authorization to Disconnect" that I would need to verify, sign and return.  Seemed pretty easy to me.

 As I reviewed the letter, I noticed a familiar account number referencing the Internet access, different than the billing account number.  It was the same account number that I used to request changes to our external DNS registrations. Bells went off in my head. Certainly those DNS entries would be ported to the new service with the IP address ranges themselves, right?  Right?

To confirm, I started off with the tech support email for my new service.  They promptly replied, saying I needed to contact the DNS team and provided additional contact information.  I called the DNS team and explained my situation.  The representative confirmed that, no, they don't have any of our DNS records in their systems.  Our DNS records are with the legacy PBI group.  I'd have to submit a request to add the DNS records with the new group so that they had them in their name servers prior to the disconnect of the legacy service.  He was also nice enough to explain their system for requesting changes, which involved knowing a magic "CCI Number" for my account.  This CCI number was totally new and different from anything else I knew about, and I promptly wrote it down as an addition to my runbook.  (I swear, I learn something new about telecommunications every time I get off the phone with AT&T.)

Then I gathered up all the known external DNS records I had documented and sent an email to the legacy DNS group asking for a copy of my zone record so I could be sure I didn't miss anything.  Based on what I have on hand, it'll be a great time to do some housecleaning with our external zone records.   I will also need to update our domain registrars with the new name servers as well.
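For the housecleaning step, a simple set comparison between what I have documented and what comes back in the zone copy will flag anything I missed in either direction. A rough Python sketch, with hypothetical record data (the host names and IPs are made up):

```python
# Records I have documented internally, as (name, type, value) tuples.
documented = {
    ("www.example.com", "A", "198.51.100.10"),
    ("mail.example.com", "A", "198.51.100.25"),
    ("example.com", "MX", "10 mail.example.com"),
}

# Records parsed from the zone copy the legacy DNS group sends back.
zone_export = {
    ("www.example.com", "A", "198.51.100.10"),
    ("mail.example.com", "A", "198.51.100.25"),
    ("example.com", "MX", "10 mail.example.com"),
    ("old-vpn.example.com", "A", "198.51.100.99"),  # stale entry
}

missing_from_docs = zone_export - documented  # cleanup (or document) these
missing_from_zone = documented - zone_export  # must be added before cut-over
print(sorted(missing_from_docs))
print(sorted(missing_from_zone))
```

Anything in the first set is a candidate for retirement during housecleaning; anything in the second has to make it into the new group's name servers before the legacy service is disconnected.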

If all goes well, this will be sorted out in a few days and I'll be free of my old circuits and billing by the end of November.  If not, I'm sure I'll have another story to tell.

Thursday, October 14, 2010

What's in Your Runbook?

At least once a year, the time comes to re-address the documentation around the IT department regarding disaster recovery. One of the things I've been working on improving over the last two years is our network runbook. We keep a copy of this binder in two places - in our document management system (which can be exported to a CD) and in hard copy, because when systems are down the last thing you want to be unable to access is the documentation about how to make things work again. 

Here's a rundown of what I have in mine so far; it's in 10 sections:
  1. Runbook Summary - A list of all servers with their IP address, main purpose, a list of notable applications running on each and which are virtual or not. I also include a list of which servers are running which operating system, a list of key databases on servers and finally copies of some of our important passwords.
  2. Enterprise AD - A listing of all corporate domains and which servers perform what roles. I include all IP information for each server, the partitions and volumes on each and where the AD database is stored. Functional levels for the domain and forest are also documented.
  3. Primary Servers and Functions - This is similar to the Enterprise AD section, but it's for all non-domain controllers. I list out server information for file services, database servers and their applications and backup servers. I document shares, partition and volume information (including the size), important services that should be running and where to find copies of installation media.
  4. ImageRight - Our document management system deserves its own section. In addition to items similar to the servers in the previous section, I also include some basic recovery steps, dependencies and the boot sequence of the servers and services. Any other information for regular maintenance or activities on this system is also included here.
  5. Email / Exchange - This is another key system that deserves its own section in my office. I include all server details (like above) and also completely list out every configuration setting in Exchange 2003. This will be less of an issue with Exchange 2007 or 2010, where more of the configuration information is stored in Active Directory. However, it makes me feel better to have it written down. I also include documentation related to our third-party spam firewall and other servers related to email support.
  6. Backup Details - A listing of each backup server, what jobs it manages and what data each of those jobs capture.
  7. Telecommunications - Details about the servers and key services. I also include information regarding our auto attendants, menu trees and software keys.
  8. Networking - Maps and diagrams for VLANs, static IP address assignments, external IP addresses
  9. Contacts & Support - Internal and external support numbers. Also include circuit numbers and other important identifying information.
  10. Disaster Recovery - Information about the location of our disaster recovery kit, hot line and website. A list of the contents of our disaster kit and knowledge base articles related to some of our DR tasks and hard copies of all our disaster recovery steps.
This binder is always in flux - I'm always adding and changing information and making notes, as well as trying to keep up with changes that other team members are making to the systems they work with most.  It will never be "done" but I'm hoping that whenever I have to reach for it, that it will always be good enough.
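For anyone building a similar summary section, even a small script over a server inventory makes the hard copy painless to regenerate whenever something changes. A minimal Python sketch, with a made-up inventory (the names, IPs and roles are illustrative only):

```python
# Hypothetical server inventory for runbook section 1.
servers = [
    {"name": "DC01", "ip": "192.168.1.10", "os": "Windows Server 2008",
     "role": "Domain controller / DNS", "virtual": False},
    {"name": "FS01", "ip": "192.168.1.20", "os": "Windows Server 2008",
     "role": "File services", "virtual": True},
    {"name": "EX01", "ip": "192.168.1.30", "os": "Windows Server 2003",
     "role": "Exchange 2003", "virtual": False},
]

def summary_lines(servers):
    """Render the runbook summary as fixed-width text that can be
    printed and dropped straight into the binder."""
    header = f"{'Server':<8}{'IP address':<16}{'OS':<24}{'V':<3}Role"
    rows = [f"{s['name']:<8}{s['ip']:<16}{s['os']:<24}"
            f"{'Y' if s['virtual'] else 'N':<3}{s['role']}"
            for s in servers]
    return [header] + rows

for line in summary_lines(servers):
    print(line)
```

Keeping the inventory in one data file and generating the summary from it means the printed copy and the source of truth never drift far apart.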
