Tuesday, September 29, 2009

Microsoft Security Essentials

This evening I installed Microsoft Security Essentials on my Samsung NC10. It replaced the free Avast! scanner that I've been using since installing Windows 7. Avast! certainly appeared to be meeting my needs, but I was hoping to lighten the load on the basic hardware this netbook is sporting.

The MSE installation was quick and easy; the longest part was waiting for the initial full scan, which took about 8 minutes. The application seems very lightweight and has very few "moving parts" to configure. Outside of adjusting the schedule for the full scan and the desired actions for the various threat levels, it's good to go. It's advisable to check out what the "recommended" responses for each threat level are, either online (there's a link) or in the help file, just so you have an understanding of how it's going to react. Unless you have some deep desire to review everything before it's removed, I think the default settings will meet the needs of most.

The last setting that probably warrants a little attention is the level of information you can opt to send to Microsoft SpyNet. Now, while the name might be a little suspect, SpyNet is the "community" all users running MSE must be part of to use the software. The basic setting will send information about detected malware, the action that was taken and the success or failure of that action. The advanced setting will also include the file names and locations of the malware, how the malware operates and how it affected your computer. All this data is aggregated to improve signature updates and detection methods. It's not possible to control which incidents you submit, so pick the level you are most comfortable with and accept that providing this data is part of what makes it "free" and keeps it up-to-date and useful.

Finally, be sure to check out the Microsoft Security Essentials Community, part of the Microsoft Answers forums for Windows, Security Essentials and Windows Live. There are some lively threads about the feature set of MSE, as well as tips for troubleshooting and submitting possible bugs.

All in all, it seems like this product will fit right in with the other free scanners available and will be suitable for the average home user or very small business that doesn't have a central way of managing virus and malware prevention.

Sunday, September 27, 2009

24 Hours Offline - Connectivity is Addictive

I'm addicted to being connected. I admit it.

I went away with some friends for a couple days on a road trip to the Yosemite area this weekend. As soon as we left the major areas of civilization and began traveling through farmland, valleys and mountains my cellular signal became spotty and then abruptly failed.

My BlackBerry transformed from my link to friends, family and information into a pocket-sized camera, alarm clock and tip calculator. And while it was handy to have those things, I sorely missed my instant access to information about the sights we came across, sharing pictures and comments with friends near and far via Twitter and Facebook, and just "knowing" what was going on even though I wasn't home making my way through my regular routine.

Instead, I enjoyed the informational displays provided by the park services about the places we visited. Shared my thoughts with those people directly around me. And much like the days before constant connectivity - I snapped photos of things to share with others later, though I wouldn't have to wait a week to develop the film.

One of the friends joining us joked several times about my addiction to connectivity. Yet, he didn't seem to mind when I found that 2 bars' worth of the free wi-fi at our campsite trickled down to one of our cabins and I could schedule the DVR at home to record a football game he'd forgotten about.

I went through phases of being relaxed about being cut off from the world, and phases of being frustrated by the "X" in the spot where my signal should have been. I'm glad to have had the chance to get away for this adventure, but you can bet I was thrilled when we broke out of the dead-zone and I was able to watch 24 hours of emails and SMS messages flood my phone like a dam had been opened.

I think it's okay that the stream of electronic data and the flow of the babbling brook outside our cabin door both have a place in my life. Though I think a few well-placed signs warning that "cellular coverage will end in 5 miles" would help me with the transition. Addicts can't always go cold turkey, you know.

Monday, September 21, 2009

Moving Up to RTM for Windows 7

Finally found a moment to install the RTM version of Windows 7 Ultimate on my netbook. I know there was some grumbling on the Internet about how one can't directly upgrade from the RC to RTM of Windows 7, but I hardly found it a problem. I'm a big fan of clean installs.

First off, I used a USB key for the install instead of the DVD, which sliced the total installation time down to about 45 minutes. For tips on how to set your USB key up for this task, check out this TechNet Tip on Using a USB Key to Install Windows 7.
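
For the curious, that tip boils down to a short diskpart session followed by an xcopy, run from an elevated command prompt. This is just a sketch from memory - the disk number and drive letters are assumptions, so double-check them against your own machine before running "clean":

    diskpart
    rem inside diskpart (assuming the USB key shows up as disk 1 - "clean" wipes whichever disk is selected):
    list disk
    select disk 1
    clean
    create partition primary
    select partition 1
    active
    format fs=ntfs quick
    assign
    exit
    rem back at the command prompt, copy the Windows 7 DVD (assumed E:) onto the key (assumed F:)
    xcopy E:\*.* F:\ /s /e /f

If the key won't boot afterward, running bootsect /nt60 F: from the boot folder of the DVD writes the Windows 7 boot code to it.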

I did back up all my personal documents before running the installation, but counted on the fact that the "windows.old" directory would have everything I'd want to transfer. Sure enough, it only took a couple clicks to return my documents, photos and music back to their rightful place. I had to handle a few driver issues again (see my previous post about installing Windows on the Samsung NC10) so I could get my function keys working properly. But since I had downloaded those drivers before, they were also in the "windows.old" directory.
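
If you'd rather do the copy-back from a command prompt than click through Explorer, a few robocopy commands do the same job; the paths below assume a default Windows 7 profile layout:

    rem pull documents, pictures and music back out of windows.old
    robocopy "C:\Windows.old\Users\%USERNAME%\Documents" "C:\Users\%USERNAME%\Documents" /E
    robocopy "C:\Windows.old\Users\%USERNAME%\Pictures" "C:\Users\%USERNAME%\Pictures" /E
    robocopy "C:\Windows.old\Users\%USERNAME%\Music" "C:\Users\%USERNAME%\Music" /E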

The only thing left after that was the applications I use on a regular basis. The majority of the applications I use are web-based, so I was left with reinstalling AV software, iTunes, Firefox, OpenOffice and a few other minor items. Once that was done, I was back in action.

I admit I didn't do everything all in one sitting, but overall I don't think it would have taken me more than 2.5 hours from start to finish if I had. Not too shabby.

Restoring ImageRight in the DR Scenario

Our document imaging system, ImageRight, is one of the key applications that we need to get running as soon as possible after a disaster. We've been using the system for over 2 years now and this is the first time we've had a chance to look closely at what would be necessary in a full recovery scenario. I'd been part of the installation and the upgrade of the application, so I had a good idea of how it should be installed. I also had some very general guidance from the ImageRight staff regarding recovery, but no step-by-step instructions.

The database is SQL 2005 and at this point it wasn't the first SQL restoration in this project, so that went relatively smoothly. We had some trouble restoring the "model" and "msdb" system databases, but our DBA decided those weren't critical to ImageRight and to let the versions from the clean installation stay.

Once the database was restored, I turned to the application server. A directory known as the "Imagewrt$" share is required as it holds all the installation and configuration files. We don't have all the same servers available in the lab, so we had to adjust the main configuration file to reflect the new location of this important share. After that, the application installation had several small hurdles that required a little experimentation and research to overcome.

First, the SQL Browser service is required to generate the connection string from the application server to the database. This service isn't started automatically in a standard SQL installation. Second, the ImageRight Application Service won't start until it can authenticate its DLL certificates against the http://crl.verisign.net URL. Our lab setup doesn't have an Internet connection at the moment, so this required another small workaround - temporarily changing the IE settings for the service account so it doesn't check for the publisher's certificate revocation.
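
Starting the browser service is a quick fix from an elevated command prompt (the service name below is the default SQLBrowser; adjust it if your instance differs):

    rem set the SQL Browser service to start automatically, then start it now
    sc config SQLBrowser start= auto
    net start SQLBrowser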

Once the application service was running, I installed the desktop client software on the machine that will provide remote desktop access to the application. That installed without any issue and the basic functions of searching for and opening image files were tested successfully. We don't have the disk space available in the lab to restore ALL the images and data, so any images older than when we upgraded to version 4.0 aren't available for viewing. We'll have to take note of the growth on a regular basis so that in the event of a real disaster we have a realistic idea of how much disk space is required. This isn't the first time I've run short during this test, so I'm learning my current estimates aren't accurate enough.

Of course, it hasn't been fully tested and there are some components I know we are using in production that might or might not be restored initially after a disaster.
I'm sure I'll get a better idea of what else might be needed after we have some staff from other departments connect and do more realistic testing. Overall, I'm pretty impressed with how easy it was to get the basic functionality restored without having to call ImageRight tech support.

Friday, September 18, 2009

Failed SQL Restores That Actually Succeed

This week's adventure in disaster recovery has been with one of our in-house SQL applications. The application has several databases that need to be restored, and we found that the Backup Exec restore job is reported as having failed with the error "V-79-65323-0 - An error occurred on a query to database ." This error doesn't prevent SQL from using the databases properly and hasn't appeared to affect the application.

Once the job completes, Backup Exec also warns that the destination server requires a reboot. We are speculating that Backup Exec is unable to run a validation query against the restored database because of the pending reboot, so the error is somewhat superfluous.

We are going to experiment a bit to see if turning off the post-restore consistency checks eliminates this error in the future, but for the moment we just opted to note the error in our recovery documentation so we don't spend time worrying about it during another test or during a real recovery scenario.

We've also found that, for some reason, it's very important to pre-create subfolders under the FTData folder before restoring the databases. If the folders related to the full-text indexes aren't available, the job will fail, too. This has required our DBA to write some scripts to have on hand at restore time to create these directories, as well as to drop and recreate the indexes once everything is restored.
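
The folder pre-creation part of those scripts amounts to a few mkdir commands; here's a rough sketch, where the instance path and catalog folder names are placeholders to be replaced with the ones from the production server:

    rem pre-create the full-text catalog folders before running the restore job
    rem (the path and folder names below are placeholders)
    set FTDATA=D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData
    mkdir "%FTDATA%\AppCatalog1"
    mkdir "%FTDATA%\AppCatalog2"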

While I appreciate learning more about the database backend of some of our applications, I'm so glad I'm not a DBA. :-)

Thursday, September 17, 2009

Paper vs. Electronic - The Data Double Standard

One of the main enterprise applications I'm partly responsible for administering at work is our document imaging system. Two years have passed since implementation and we still have some areas of the office dragging their feet about scanning their paper. On a daily basis, I still struggle with the one big elephant in the room - the double standard that exists between electronic data and data that is on paper.

The former is the information on our Exchange server, SQL servers, financial systems, file shares and the like. The latter is the boxes and drawers of printed pages - some of which originally started out on one of those servers (or a server that existed in the past) and some of which did not.
In the event of a serious disaster it would be impossible to recreate those paper files. Even if the majority of the documents could be located and reprinted, any single group of employees would be unable to remember everything that existed in a single file, never mind hundreds of boxes or file cabinets. In the case of our office, many of those boxes hold data that dates back decades, including handwritten forms and letters.

Like any good company, we have a high level plan that dictates what information systems are critical and the amount of data loss that will be tolerated in the event of an incident. This document makes it clear that our senior management understands the importance of what the servers in the data center contain. Ultimately, this drives our IT department's regular data backup policies and procedures.

However, IT is the only department required by this plan to ensure the recovery of the data we are custodians of.
What extent of data loss is acceptable for the paper data owned by every other department after a fire or earthquake? A year of documents lost? 5 years? 10 years?
No one has been held accountable for answering that question, yet most of those same departments won't accept more than a day's loss of email.

Granted, a lot of our paper documents are stored off site and only returned to the office when needed, but there are plenty of exceptions. Some staffers don't trust off-site storage and keep their "most important" papers close by. Others in the office will tell you that the five boxes next to their cube aren't important enough to scan, yet are referenced so often they can't possibly be returned to storage.

And there lies the battle we wage daily as the custodians of the imaging system: simply getting everyone to understand the value of scanning documents into the system so they are included in our regular backups. Not only are they easier to organize, easier to access, more secure and subject to better audit trails, but there is also a significant improvement in the chance of their survival when that frayed desk lamp cord goes unnoticed.

Wednesday, September 16, 2009

Windows Server July 2010 Support Changes

On July 13, 2010, several Windows Server products will hit new points in their support lifecycle. Windows 2000 Server will move out of Extended Support and will no longer be publicly supported. Windows Server 2003 and Server 2003 R2 will move from Mainstream Support to Extended Support, which will last another 5 years.

This forces a new deadline on some of the improvements that need to be planned at my office. Our phone system and our main file server are still operating on 2000 Server. I have been planning to upgrade the phone system for a long time now, but it continually gets pushed back due to other, more pressing projects. Our file server is an aging, but sturdy, HP StorageWorks NAS b3000 - "Windows-powered" with a specialized version of 2000 Server. Both deserve more attention than they've been getting lately, so now there is a reason to move those items higher up on the list.

For more information about these support changes, check out "Support Changes Coming July 2010" at the Windows Server Division Weblog.

Tuesday, September 15, 2009

More Windows 7 Beta Exams

There are two new Windows 7 beta exams available for a short time. As with most beta tests, you won't find any official study material and likely won't have enough time to read it anyway. However, if you've been using and testing Windows 7 since its beta days, it's worth a shot to take one of these exams.

The promo code will get you the test for free and you'll get credit for the real deal on your transcript if you pass. Seats and time slots are VERY limited, so don't waste time thinking about it too long.

71-686: PRO: Windows 7, Enterprise Desktop Administrator
Public Registration begins: September 14, 2009
Beta exam period runs: September 21, 2009 - October 16, 2009
Registration Promo Code: EDA7

When this exam is officially published (estimated date 11/16/09), it will become the official 70-686 exam, which is one of two exams needed for the MCITP: Enterprise Desktop Administrator 7 certification.

71-685: PRO: Windows 7, Enterprise Desktop Support Technician
Public Registration begins: September 14, 2009
Beta exam period runs: September 14, 2009 - October 16, 2009
Registration Promo Code: EDST7

When this exam is published (also on 11/16/09) as 70-685, it will count as credit toward the MCITP: Enterprise Desktop Support Technician 7 certification. This certification isn't listed yet in the MCITP certification list, but I suspect it will be paired with the 70-680 exam, much like the Enterprise Desktop Administrator 7 certification.

Information about these and other exams can be found at Microsoft Learning.

Restoring Exchange 2003

After getting Active Directory up and running in my disaster recovery lab, my next task was to restore the Exchange server. The disaster we all think of most in San Francisco is an earthquake, which could either make transportation to the office ineffective or render the office physically unsafe. Email is one of our most used applications, and in a disaster we predict it will be the primary means of communication between users working from outside of the office. Assuming a functional Internet backbone is available to us, email will likely be the fastest way to get our business communications flowing.

Restoring Exchange 2003 is a straightforward process, provided you have all the previous configuration information available to you. In our DR kit, we have a copy of Exchange 2003 and the current Service Pack we are running. We also have a document that lists out the key configuration information. Before you restore, you'll want to know the drive and paths for the program files, the database and the log files. If the recovery box has the same drive letters available, the restoration is that much smoother, since you can set the default installation paths to the same locations and ensure that you'll have the necessary amount of free space.

It's important to remember to install Exchange and the Service Pack using the /DisasterRecovery switch. This sets up the installation to expect recovered databases instead of automatically creating new ones. I had to manually mount the databases after the restoration, even though I had indicated in the Backup Exec job that the databases should be remounted when the restore was complete. Microsoft KB 262456 details the error event message I was getting about the failed MAPI call "OpenMsgStore", which was a confirmed problem in Exchange 2000/2003.
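
For the recovery documentation, the install commands end up looking something like this (assuming D: is the Exchange 2003 CD and the service pack has been extracted to C:\E2K3SP2 - adjust the paths for your own media):

    rem install Exchange 2003 in disaster recovery mode
    D:\setup\i386\setup.exe /disasterrecovery

    rem then re-apply the service pack the same way
    C:\E2K3SP2\setup\i386\update.exe /disasterrecovery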

Sunday, September 13, 2009

Don't Overlook the Metadata Cleanup

It seems inevitable that while restoring Active Directory in a disaster recovery scenario, one is going to feel rushed. Even with this being a test environment, I felt like getting AD back was something that needed to be quick so we could move onto the more user-facing applications, like Exchange.

My network has two Active Directory domains, a parent and a child domain in a single forest. The design is no longer appropriate for how things are organized at our company, and we've been slowly working to migrate servers and services to the root domain. Right now, we are down to the last 3 servers in our child domain and one remaining service account. The end is in sight, but I digress.

The scope of our disaster recovery test does not involve restoring that child domain. This is becoming an interesting exercise, because it will force us to address how to get those few services that reside in that domain working in the DR lab. This will also help us when we plan the process for moving those services in production.

Bringing back a domain controller for my root domain went by the book. I could explain away all of the random error messages, as they were all related to this domain controller being unable to replicate to the other DCs, which hadn't been restored. I had recovered the DC that held the majority of the FSMO roles and seized the others. I started moving on to other tasks, but I couldn't get past the errors about this domain controller being unable to find a global catalog. All the domain controllers in our infrastructure are global catalogs, including this one, as I hadn't made any change to the NTDS settings once it was restored.
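
For reference, seizing the missing roles is typically done with the same NTDSUTIL tool used later for the cleanup; a rough sketch of that session looks like this, with the server name being a placeholder for the restored DC:

    ntdsutil
    roles
    connections
    connect to server RESTOREDDC01
    quit
    seize schema master
    seize domain naming master
    seize RID master
    seize PDC
    seize infrastructure master
    quit
    quit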

So I took the "tickle it" approach and unchecked/rechecked the Global Catalog option. The newly restored DC successfully relinquished its GC role and then refused to complete the process to regain the role again. It was determined to verify this status with the other domain controllers it knew about, but couldn't contact.

I knew that for this exercise I wasn't bringing back any other domain controllers. And in reality, even if I were going to need additional DCs, it would be far easier (and less error-prone) to just promote new machines than to bother restoring every DC in our infrastructure from tape. (However, I still back up all my domain controllers, just to be prepared.)

To solve the issue, I turned to metadata cleanup. Using NTDSUTIL, I removed the references to the other DC for the root domain, the DC for the child domain and, finally, the lingering and now orphaned child domain itself. I also had to go into "AD Domains and Trusts" to delete the trust to the child domain, which wasn't removed when the metadata was deleted. Once all these references were removed, the domain controller was able to assume the global catalog role and I could comfortably move on to restoring our Exchange server.
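
For anyone who hasn't been through it, the NTDSUTIL session goes roughly like this. The server name and the numbers picked from the list commands are placeholders - run the list commands first and choose carefully:

    ntdsutil
    metadata cleanup
    connections
    connect to server RESTOREDDC01
    quit
    select operation target
    list domains
    select domain 1
    list sites
    select site 0
    list servers in site
    select server 0
    quit
    remove selected server
    quit
    quit

The selection and removal steps get repeated for each dead DC, and "remove selected domain" clears the orphaned child domain once its server references are gone.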

And I've learned that just because I can explain an error, doesn't mean I can ignore it.

Thursday, September 10, 2009

AD Recycle Bin - New in Server 2008 R2

This week I continued with disaster recovery testing in our lab, the first machine restored from tape being one of our domain controllers. While checking over the health of the restored Windows 2003 Active Directory, I remembered that we are using a third-party tool in production to aid in the recovery of deleted items - Quest's Active Directory Recovery Manager. To be honest, we haven't had a reason to use the software since we installed it, which I suppose is a good thing. But it is a stress reliever to know that it's there for us.

Restoring this product in our test lab isn't part of the scope of this project, but it does have me looking forward to planning our Active Directory migration to Server 2008 R2, which includes a new, native "recycle bin" feature for deleted Active Directory objects. You can find more details about how this feature works in Ned Pyle's post on the Ask the Directory Services Team blog, The AD Recycle Bin: Understanding, Implementing, Best Practices, and Troubleshooting.

While the native feature doesn't have the ease of a GUI and requires your entire forest to be at the 2008 R2 functional level, it's certainly worth becoming familiar with. Once I'm done with all this disaster testing, you can be sure this feature will be at the top of my list to test out when I'm planning that upgrade.

Wednesday, September 9, 2009

Check Out TechNet Events

Today I enjoyed a morning at the Microsoft office in SF attending an event in the current series of TechNet Events. Through the months of September and October, the TechNet Events team is traveling around the US providing tips, solutions and discussion about using Windows 7 and Server 2008 R2.

Today's presentation was given by Chris Henley, who led some lively and informative discussions on three topics - Tools for migration from Windows XP to Windows 7, Securing Windows 7 in a Server 2008 R2 Environment (with Bitlocker, NAP and Direct Access) and new features in Directory Services.

I was excited to see specific information on Active Directory. If you missed the blogs about Active Directory Administrative Center back in January like I did, you'll like some of the new features in this 2008 R2 tool, including the ability to connect to multiple domains and improved navigation views.

If there isn't an event near you this time around, check back after the holidays when they'll head out again for another series.

Tuesday, September 8, 2009

64-bit ImageRight support? - The "drivers" are in control.

The disaster recovery testing is touching more areas than I ever thought possible related to what options we can consider in our production and emergency environments. It's bringing to light how interconnected software has become, and how those connections can sneak up on you, even when one is dealing with them every day.

A basic premise of our recovery plan is to provide access to our recovered systems remotely, until we can make office space and desktop systems accessible to everyone. In order to keep things "simple" and provide the quickest possible up time, the plan calls for using Windows Terminal Services (aka "Remote Desktop Services" in 2008 R2) technology.

Due to the improvements in the remote access offerings available directly from Microsoft, and the relatively small number of applications we need to make available, we determined that bringing terminal services up initially would be faster than recreating our Citrix environment during an emergency.

In conjunction with this (and the fact that we have only a small amount of remote use in production) we are currently planning to reduce licensing costs by only providing access using Microsoft products. Windows Server 2008 (and now R2) has many of the features we were looking to Citrix for in the past. While it's possible for us to meet most of our needs with Server 2008, we'd much prefer to use 2008 R2.

While I was at the Vertafore Conference, one of my goals was to find out their schedule for 64-bit support. As ImageRight is one of our main enterprise applications, it's important that it be available on our remote access solution. Since I was unable to run the software on my 64-bit Windows 7 computer, I wanted to know how far they were from addressing that.

Turns out, it all comes down to third-party drivers for peripherals. ImageRight works with several popular hardware vendors when it comes to scanners, including Kodak, Canon and Fujitsu. This allows customers to take advantage of more of the built-in scanner features that come with the hardware, instead of writing a generic scanner driver that could reduce the functionality native to the device. They also use the drivers to provide desktop features that allow end users to import documents directly from their PC.

Because of this, 64-bit support for the ImageRight software is directly related to how quickly scanner vendors make 64-bit drivers available. ImageRight claims that the makers of these key peripheral devices are complaining that Microsoft didn't give them enough warning between Windows Server 2008 and the release of Server 2008 R2 regarding the official "death" of the 32-bit version of the OS to provide 64-bit drivers for all their devices.

ImageRight is planning to have support for 64-bit operating systems by the end of this year. We aren't planning a widespread upgrade of desktop hardware to 64-bit any time soon and will be able to wait without too much suffering. However, it does alter our plans for our remote access changes in the next 3-6 months. A disappointment for sure.

Also, the delay doesn't help existing ImageRight clients or upcoming new ones that hope to run (or at least begin to test) an important software product on the most current hardware and operating systems available. An interesting domino effect that ends in needing to reconsider what I'll be using for remote access during my recovery testing this month.

Saturday, September 5, 2009

Windows 7 Tidbits

I found some interesting Windows 7 things online recently and wanted to share.
  • Check out the agenda for the Windows 7 Online Summit being held on October 7th. I'm hoping I'll be able to keep people from interrupting me at work for a few hours that day.
  • Finally, if you haven't had a chance to demo Windows 7 and you aren't a TechNet/MSDN subscriber or a volume license customer, the Enterprise version is now available as a 90-day trial, from the TechNet Springboard.

Britain's Blast from the Past

The National Museum of Computing in Bletchley Park is pushing aside the mothballs and reassembling a computer they've had in storage since 1973. Check out the Harwell WITCH story on Wired.com's Gadget Lab.

If you are looking for something more modern, take a look at the story behind the new Wired.com office kegerator, dubbed the Beer Robot. The exterior design was done by my husband - I'm not ashamed to flaunt his design skills.

Thursday, September 3, 2009

Disaster Recovery Testing - Epic Fail #1

As I've mentioned before, my big project for this month is disaster recovery testing. A few things have changed since our last comprehensive test of our backup practices and we are long overdue. Because of this, I expect many "failures" along the way that will need to be remedied. I expect our network documentation to be lacking, I expect to be missing current versions of software in our disaster kit. I know for a fact that we don't have detailed recovery instructions for several new enterprise systems. This is why we test - to find and fix these shortcomings.

This week, at the beginning stages of the testing, we encountered our first "failure". We've dubbed it "Epic Failure #1" and it's all about those backup tapes.

A while back our outside auditor wanted us to password protect our tapes. We were running Symantec Backup Exec 10d at the time and were happy to comply. The password was promptly documented with our other important passwords. Our backup administrator successfully tested restores. Smiles all around.

We faithfully run backups daily. We run assorted restores every month to save lost Word documents, quickly migrate large file structures between servers, and correct data corruption issues. We've had good luck with the integrity of our tapes. More smiles.

Earlier this week, I loaded up the first tape I needed to restore in my DR lab. I typed the password to catalog the tape and it told me I had it wrong. I typed it again, because it's not an easy password and perhaps I had made a mistake. The error message appeared; my smile did not.

After poking around in the Backup Exec databases in production and comparing existing XML catalog files from a tape known to work with the password, we concluded that our regular daily backup jobs simply have a different password. Or at least the password hash is completely different, yet this difference is repeated across the password-protected backup jobs on all our production backup media servers. Frown.

After testing a series of tapes from different points in time and from different servers, we came to the following disturbing conclusion: the migration of our Backup Exec software from 10d to 12.5, which also required us to install version 11 as part of the upgrade path, mangled the password hashes on the pre-existing job settings. Or it uses a different algorithm, or something similar with the same result.

Any tapes with backup jobs that came from the 10d version of the software use the known password without issue. And any new jobs that are created without a password (since 12.5 doesn't support media passwords anymore) are also fine. Tapes that have the "mystery password" on them are only readable by a media server that has the tape cataloged already, in this case the server that created it. So while they are useless in a full disaster scenario, they work for any current restorations we need in production. We upgraded Backup Exec just a few months ago, so the overall damage is limited to a specific time frame.

Correcting this issue required our backup administrator to create new jobs without password protection. Backup Exec 12.5 doesn't support that type of media protection anymore (it was removed in version 11) so there is no obvious way to remove the password from the original job. Once we have some fresh, reliable backups off-site I can continue with the disaster testing. We'll also have to look into testing the new tape encryption features in the current version of Backup Exec and see if we can use those to meet our audit requirements.

The lesson learned here was that even though the backup tapes were tested after the software upgrade, they should have been tested on a completely different media server. While our "routine" restore tasks showed our tapes had good data, it didn't prove they would still save us in a severe disaster scenario.

Wednesday, September 2, 2009

Why Certify? Or Not?

Last night, I presented a brief overview of current Microsoft certifications at the PacITPros meeting. One of the questions that came up was how to determine the ROI of getting certified. Right now, I'm in the early stages of updating my messaging certification from Exchange 2003 to Exchange 2007. My office pays for exam fees, so I like to take advantage of that when I can. But why certify at all?

For me, it's not a "bottom line" calculation. I do it as a motivator to keep learning. The nature of the business where I work means we tend to deal with a lot of dated software and don't always have a need to upgrade to the latest or greatest of anything. We usually run about 3 years behind, particularly with Microsoft software, though that has been changing. If I wasn't personally interested in staying current, I could easily let my skills lag behind.

Getting a certification in a specific technology gives me something tangible to work towards. By using some extra lab equipment at the office and making time to read, I can have a little fun and stay up to date on technology that will eventually get deployed in production.

Certification isn't a perfect science. I know that the exams aren't always in line with real production situations, but they have been improving over the years. And I know there are people on both ends of the spectrum - those that have great skills or experience with no certifications and others with limited experience and a series of letters after their name. I aim for balance. I stick with the topics and products that are in line with what I work with regularly, so I can be confident that taking the time to study is going to provide value.

Right now, getting a certification doesn't end in extra bonuses or a higher salary grade. But maybe one day it will be the item that stands out on my resume when compared to others with similar experience. Or show that I have the ability to set a goal and follow through. Or perhaps I'll just enjoy challenging myself - certainly no harm in that!
