Thursday, March 29, 2007

Exchange Clustering Day 5.6.7.8.9 something

In the last few days we have been moving mailboxes and trying to iron out some issues that have come up.

One issue that came up was that RUS (Recipient Update Service) was pointing to an old domain controller that was decommissioned a LONG time ago. Anyway, we reconfigured that. The symptom was that when a mailbox was moved, it took forever and a day for the Outlook client to reconnect, when it should reconnect in seconds. FIXED!

Another issue we were having: when moving a mailbox that had a BlackBerry associated with it, the BB would not be able to send emails. I think this was b/c RUS was all eF'ed up too. FIXED!

We were also having Symantec Enterprise Vault issues when we had to re-set up the services that archive the public folders. The application had to associate the service with the system mailbox, and it could not see ANY mailboxes. So we rebooted the EV server, and then we were able to see all the mailboxes and choose the system mailbox for the EV service. FIXED!

Repathing SMTP. All of our incoming and outgoing email goes through MessageLabs; you can pretty much say they hold the MX record for us. They only send email to SMTP.domain.com and only receive email from the A.B.C.D IPs we give them. If my current single Exchange server is already working, can't I just swap IPs? Well, yes I can, and no, it will not work 100% of the time. Here is why. My firewall object points to my Exchange server, and switching the internal IP on that object will work for incoming email, since SMTP.domain.com points to a public IP that is NAT'ed to an internal IP. BUT... what we have come to find is that in a cluster, outgoing email is a totally different story. Whichever node is the active one, that node's IP is the one attached to the outbound email, even though the email is coming from the virtual cluster name, which is registered with the correct external IP. It's the active node's IP that the emails hang on to. So MessageLabs was seeing this new IP from the active node trying to send email and was rejecting it.

I had to call MessageLabs and have them add both of the external IPs I created for the nodes. Well, it takes 4-6 hours to propagate. We send email to a MessageLabs cluster, so the changes have to hit all the servers in their cluster. I was able to get some emails out b/c some towers in the cluster had the changes and some didn't. I'll just wait until they all have the changes and switch IPs later on tonight. So for now all my email is still flowing through the single (non-clustered) Exchange server. Something to watch out for if you are clustering Exchange. FIXED! (in a few hours)
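
If you want to sanity-check the outbound path before switching for real, a manual SMTP session from whichever node is active is a quick test. The tower hostname and the addresses below are placeholders, not MessageLabs' real servers or our real domain:

nslookup -type=mx domain.com
telnet <messagelabs tower> 25
ehlo smtp.domain.com
mail from:<test@domain.com>
rcpt to:<someone@outside-domain.com>
quit

If the session gets refused from one node but accepted from the other, the new IP simply hasn't propagated to that tower yet.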

Friday, March 23, 2007

Exchange Clustering Day 4

We've tested mailbox moves and all works well with a very small mailbox. I am in the process of moving my mailbox (700MB). I estimate it will take 40 minutes to move b/c I'm doing this in the middle of the day and the server is busy.

One thing I have to keep in mind is that the current production Exchange server is the only server that can send email through the firewall. It's also the only server that can send email to MessageLabs. So what I will have to do is change the firewall object to point to the cluster's internal virtual IP. This should allow the cluster to send and receive email without sending on behalf of the current Exchange server like it is doing now for testing.

I am also testing BlackBerry functionality with a mailbox move to the new cluster as well.

Thursday, March 22, 2007

Exchange Clustering Day 3

Today we are tweaking the cluster and configuring replication on the public folders, since that will take forever and a day. DAMN, forget a day, it will take a few days. We have a 100GB public information store. Yeah, we really use our public folders. Hopefully today we can move a test mailbox over to the cluster and see the results.

Exchange Clustering Day 2

Day 2 was actually yesterday. It was a bit busy and frustrating at times. Both servers were acting real funny right from the very beginning. On one, OS Service Pack 2 wasn't showing up in Add/Remove Programs even though it was installed. I even went ahead and installed it again three times. We went ahead and installed Exchange on both servers and then created the Exchange virtual server. The Exchange virtual server was created by creating an IP Address resource, a Network Name resource, and a Physical Disk resource, then the System Attendant resource, which is what makes the cluster show up in the Organization.
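
If you'd rather script the plumbing than click through Cluster Administrator, the IP Address and Network Name pieces can be built with cluster.exe along these lines. The group, resource, address, and network names here are invented for the example, and it assumes the public cluster network is actually named "Public" (the Physical Disk and System Attendant resources we still handled through the GUI and Exchange setup):

cluster resource "EVS IP Address" /create /group:"Exchange Group" /type:"IP Address"
cluster resource "EVS IP Address" /priv Address=192.168.10.50 SubnetMask=255.255.255.0 Network="Public"
cluster resource "EVS Network Name" /create /group:"Exchange Group" /type:"Network Name"
cluster resource "EVS Network Name" /priv Name=EXCHVS01
cluster resource "EVS Network Name" /adddep:"EVS IP Address"
cluster resource "EVS IP Address" /online
cluster resource "EVS Network Name" /online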

After that, failover from server A to B and back was taking entirely too long. Then Exchange SP2 wouldn't install on server A; the MSDTC service would always fail to start. So we followed some of the articles to remove and re-add it, but the service never added back for some reason. So at the very end of the day I made the decision to REFORMAT both servers and reinstall.
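
For reference, the remove-and-re-add we were attempting boils down to the standard MSDTC reinstall from a command prompt, plus recreating the Distributed Transaction Coordinator resource in Cluster Administrator afterwards since this is a cluster:

net stop msdtc
msdtc -uninstall
msdtc -install
net start msdtc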

We decided to look for newer hardware drivers and firmware updates, found some, and installed those. Then we replaced the heartbeat crossover cable. Then we reinstalled. Everything works so much better, and the service pack was installed before the cluster was brought up this time, LOL!

That was yesterday.

Tuesday, March 20, 2007

Exchange Clustering Day 1

After getting the RAM and HBAs into the servers, racking them, and connecting the heartbeat and LAN, we installed the OS on both and patched them. We've named them A and B and got the carving work all set on the SAN. We've carved out 64GB for logs, 100GB for the private store, and 150GB for the public store. We've also carved out 500MB for the quorum drive and 5GB for the Exchange mount drive. The Exchange LUN is there to minimize the number of drive letters that will show up on the server. There will only be C:, E:, and Q:. No D:, F:, G:, or H:. Why? Here is why: on the drive labeled Exchange there will be mount points for the transaction logs, the private information store, and the public information store. Normally these mounts would have been drive letters in My Computer.
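
As a rough sketch of how a mount point gets hung off that Exchange drive once a LUN is presented (the disk number and folder path are only examples, not our exact layout), diskpart can assign a folder instead of a drive letter. The folder has to exist as an empty directory on E: first, and the new volume still gets formatted afterwards:

diskpart
rem E:\Logs must already exist as an empty folder
select disk 2
create partition primary align=64
assign mount=E:\Logs
exit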

Then we'll turn off one of the servers, in this case B. We'll assign the quorum and Exchange LUNs to server A and run diskpart to set the partition offset for performance.

diskpart
select disk #
create partition primary align=64

Do this for each LUN as per EMC's best practice.

Shut down server A and bring up server B. On server B we'll assign the same LUNs that we just assigned to A. (NOTE: you technically are not supposed to assign a single LUN to two servers; the exception is a cluster environment, which is what we are implementing. This is why one server is turned off.) Once the LUNs are assigned we can run Cluster Administrator. It does not matter which server we are on as long as one of them is turned off.

In Cluster Administrator, choose to create a new cluster and add the current server to it. This server will be in the cluster alone for now and, most importantly, will LOCK the shared LUNs so the other server cannot write to them when it's turned back on. Add the name of the other server in the wizard. Once done, turn the other server back on, open Cluster Administrator on that server (the one just turned on), and run the wizard again, but this time select add node to cluster.

The cluster should be all set up now. You should see your heartbeat and LAN connections under Networks. You will have to set up disks in the Cluster Group for all your LUNs, even the LUNs that are mount points inside the Exchange LUN created earlier. You should also see which server owns the disks at any given time; there can only be one owner of the disks. You can change owners, which shifts the disks over to the other server, by right-clicking Cluster Group and choosing Move Group. That moves the disks over to the other node manually. It will also happen automatically if something happens to the active server. I am setting up an active/passive cluster, BTW.
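
The same ownership check and a manual failover test can also be done from a command prompt with cluster.exe; the node and group names below are just examples:

cluster node /status
cluster group "Cluster Group" /status
rem hand the group, and its disks, over to the other node
cluster group "Cluster Group" /moveto:NODEB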

Exchange install tomorrow.

VMware consolidation project, answer

To answer my own question of how one VMware box running five virtual servers with four HBAs (two going to the SAN and two going to the XServeRAID) will share resources:

I'll need ESX Server, and the ESX server will find all the hardware. It will act as the sole server connected to the SAN and the XServeRAID; the five virtual servers will know nothing of these storage devices. The ESX server will have two paths to the SAN and two paths to the XServeRAID, two HBAs for each. Once that is set up, the ESX server will have the disks ready for each server before the actual virtual servers are installed. Then I'll install the OSes and assign the disks to each server.

I can't wait to tackle this project.

Friday, March 16, 2007

VMware consolidation project

As I am starting the installation for my Exchange cluster project, I am also thinking about future projects. What came to mind was to consolidate five servers into one VMware box. Sounds easy enough, but I have questions that I am uncertain about. We have:

Intranet server
OWA server
file server (Lib)
file server (home directorys)
file server (img)

I want my IIS servers to pull their data from my SAN (separate LUNs). I want my file server (lib) to pull from the SAN also (separate LUN); that's the easy part. The other two servers, file server (home directory) and file server (img), I'd like to pull data from an XServeRAID. Being that the XServeRAID is not a SAN (in our environment), I cannot give them their own LUNs. They would be sharing the same volume if I set it up in its current state, and that is not good practice. So I'd either have to put one or the other on the SAN. Anyway, the real question is: if I have these five servers on one physical box with four HBAs (two to the SAN and two to the XServeRAID), how would five servers share four HBAs using VMware? Are virtual HBAs set up? I'll have to find out the answer to that.

VMware vendors don't call me, I'll call you.

Thursday, March 15, 2007

Update 4 External file hosting

I've also been trying to identify an external file hosting solution. I won't get into names, as I don't want to directly give away the industry I work in, but some who can relate can figure it out, or you can just ask me via email. Anyway, for 200GB and 1000 users they want to charge us 90K a year to host some files for us. I could do that for a fraction of the price, as Verizon FIOS is available in my neighborhood now :D They've heard the same joke and said it's not the storage alone we are paying for but the servers and the redundancy and X and Y and Z. I think it's the name associated with them that makes the price so high. After all, they do make the applications we use to generate our large files.

Update 3 Symantec FSA

We've got Symantec's FSA (file system archive) installed and running. We are using it to clean old files off the production file server. We have 1.2TB of Adobe PSD files on the file server older than 60 days. Some of you have that as the total of all your storage; well, that is a fraction of ours, and it's easily noticeable by running tools to identify files by extension and size. We are removing these files and leaving pointers so that we can get space back for our Exchange cluster project that starts next week.
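
The real reporting came from proper tools, but as a quick-and-dirty sketch (the share path here is made up), a plain dir will at least list every PSD under the share with its size, sorted largest-first within each folder, so you can see what's eating the space:

dir \\fileserver\library\*.psd /s /a-d /o-s > psd_report.txt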

Update 2 Cisco IPT

I've been dealing with our Cisco rep and our IPT implementer, trying to figure out why the deployment of key features that we'd like to roll out to the user base is so painful.

The installation of our IPT system went perfectly. The migration to the new system went well also. It's been up and running for a few months now and we are satisfied with the phone system itself. What we are not satisfied with is that the features of the system each require a different password. Pretty much every feature of Cisco Unity Connection has a separate password with different requirements. What I mean is, when we originally set up the user template in Unity Connection, the password was 8 characters. So we went with that, not knowing that when we set up other features later, those features would require 6-character passwords. HUH! So change one set to match everything, easy fix, right? WRONG! Once the original users are set up on the system, before rolling out these additional features, you cannot change the template without wiping out the user. The template is set in stone. So as a result the users had a voice mail password of 8 characters and a PCA (Personal Communications Assistant) password of 6 characters, not to mention a Windows password that expires every 90 days. Also, we wanted to roll out the IMAP feature that lets your voice mail show up in your Outlook in a separate mailbox, which also has a password. You see what I'm getting at. Too many DAMN passwords with Unity Connection, and nobody told us this when we were buying it.

These little things are easily overlooked or seem too stupid to even ask about when you are buying a 250K phone system. You would think Cisco would streamline some of these features so that even the dumbest thing an ordinary person can think of would be covered. Sadly that is not the case.

Another gripe with Cisco's IPT is that we went with Unity Connection b/c we did NOT want our voice mails stored on our Exchange server, which is what Unified Messaging does. Unified Messaging has all these features under a single password (I think), but the VMs are stored inside the Exchange server. That's a NO NO on so many levels. Some companies have very strict email policies and emails are deleted every X days; if VMs are in there, they are automatically treated like emails and wiped out. Financial firms come to mind. Law firms also. So this is why Unity Connection exists: VMs are not stored on the Exchange server, they are kept on the Unity server. But Unity Connection has all the crap I discussed up top. So why don't they offer the best of both worlds? Who knows! The ideal product would be Unified Messaging with the ability to pick where you want to store your voice mails. If I wanted them in my Exchange server, that would be the default. If I wanted them on another server with links to Exchange, that should be an option. It can be done; this is America and it's 2007, anything can be done. We use Symantec Enterprise Vault to archive email: it pulls emails out of the Exchange server and stores them in a database on another server/storage device but leaves a pointer to those archives. One click and it's back in seconds. The same thing could happen with VMs too, Cisco. Wake up!

Update 1 Exchange

So what has been going on since my last post, which was on.......January 4th? Well, back in January one of our Exchange servers had a hardware failure. This server was in our remote office in London. The server was down for a day or two b/c there was no 4-hour turnaround warranty on parts. As a result, some very important people could not get email on that end, and one important person wanted this to never happen again. (Oh, just to be clear, I do not administer the London office; my counterpart does that. My servers do have 4-hour turnaround on parts.) With that said, I've come up with a solution for both offices.

My solution is to cluster the Exchange servers in both offices, increasing the redundancy of the server. Both offices currently have one Exchange 2003 server. All email flow comes into the NY office and goes to the LD office across our private line. Next week we will start this project in both offices. My solution requires two new Exchange servers running on MS Server 2003 Enterprise Edition. The information stores will be held on our SAN here and on the SAN in the LD office. This also sets us up to start cloning the databases for backups and do away with tape, sort of. Our current Exchange 2003 server will act as a restore server. It will be hanging off the cluster, so to speak, and in the event of a restore that server will be the one mounting the database (information store).