It's all about documentation and diagrams so others can know what has been done. Here is my new network diagram. It looks simple, but it is very complex on the inside. That complexity lives in the documentation, which you cannot and will not see ;)
New diagram
Old diagram
Wednesday, May 31, 2006
Sunday, May 28, 2006
Apple Xserve RAID in a Windows 2003 server environment
We've faced a few storage crises in the past, the main one being the production data. We have also been faced with archive storage problems. Years ago, when a project (a huge root folder, gigabytes in size) was ready to be archived, we would do the following, in this order:
-make a tape backup and remove the files from the network
-burn a lot of CDs and remove the files from the network
-burn DVDs and remove the files from the network
-copy the project to a NAS device
-copy the project to cheap, large IDE/SATA drives
That is a basic timeline of our archive process, and it has become very inefficient. Even though everything was properly labeled and could be found, it was still inefficient. Why? It could take a long time to search through all the individual backup tapes, CDs, DVDs and archive locations on the network and NAS devices for a specific piece of data. The data was all over the place, as you can see.
We resolved these problems by revisiting centralized storage. An expensive SAN for dead storage is a waste of a lot of money (depending on your environment and your budget). A bunch of cheap storage is also a waste, because you will be buying a BUNCH of devices, eliminating the centralization factor. And when those devices get remodeled and decommissioned, you are stuck with a pile of devices of different storage sizes, shapes and styles. Our choice was Apple's Xserve RAID.
We decided on the Apple Xserve RAID after careful research. Our biggest question was: since it is an Apple product, would our Windows servers see the storage? And would our Windows servers see a volume over 2 TB? Windows used to have a 2 TB limit on its volumes, back in the early Windows 2000 Server days. Then Microsoft came out with dynamic disks, which let you combine two basic disks into a dynamic volume larger than 2 TB. Read more about it here: Reviewing Storage Limits.
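For the curious, that old 2 TB ceiling is just arithmetic: the MBR partition table stores sector counts in 32-bit fields, and at 512 bytes per sector that tops out at exactly 2 TiB. A quick back-of-the-envelope check in Python:

```python
# Why basic (MBR) disks topped out at 2 TB: the partition table stores
# sector counts in 32-bit fields, and a sector is 512 bytes.
max_sectors = 2**32                   # largest value a 32-bit field holds
sector_size = 512                     # bytes per sector
max_bytes = max_sectors * sector_size

print(max_bytes / 2**40)              # 2.0 -> exactly 2 TiB
```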
Once we were confident that our servers would be able to create a volume over 2 TB, we placed the order for the 7 TB Xserve RAID. It cost us $13,000. Talk about cheap storage.
The device is by no means a cheap POS, though. It has 2 storage processors, 2 fiber connections, 2 power supplies and 14 hot-swap Ultra ATA 500 GB drives. This is a very nice piece of hardware; just like all of Apple's hardware products, they did not slack on this one. As good as it looks, it was even easier to set up. The unit has 2 of everything, including network interfaces. These interfaces are set for DHCP, so you just plug the device in and they are on your network. Apple's RAID Admin tool, which comes with OS X (they also have a Java-based version for Windows environments), finds the device on your network. You can then change the IPs of both interfaces to conform with your server or storage device IP addressing. Within RAID Admin you can manage the device like any other enterprise-level storage device. We are not using the Xserve RAID as a true SAN, so we are not managing drive space in that manner. We are using it as a NAS on steroids, or a limited SAN. I say that because we have connected the Xserve RAID to our Fibre Channel switch and zoned it out to its host server. Our Windows 2003 server detects the Xserve RAID as if it were an internal drive or a SAN drive.
So far the Xserve RAID is working great. After creating the partition in Windows Disk Manager and waiting 28 hours for the drives to initialize (yes, 28 hours), the device has been working flawlessly. Windows 2003 Server sees 5.5 TB out of the 7 TB raw, so keep in mind that you will lose 1.5 TB to overhead. We have populated the device with 2.5 TB of archive data so far, and we still have a lot of data left to add. Will we need another one soon? I hope so, because these devices are very nice to have on your network. They just make managing and centralizing storage so easy, cheap and simple.
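That missing 1.5 TB is easy to account for, assuming the stock configuration of two 7-drive RAID 5 sets (one per controller), which lines up with our numbers: you lose two drives' worth to parity, and the rest to the decimal-vs-binary TB difference in how Windows reports size. A rough sketch:

```python
# Rough accounting for 7 TB raw -> ~5.5 TB in Windows. Assumes the
# Xserve RAID is configured as two 7-drive RAID 5 sets (one per
# controller), i.e. two drives' worth of parity in total.
drive_bytes = 500 * 10**9             # a "500 GB" drive in decimal bytes
data_drives = 14 - 2                  # 14 drives minus 2 for parity

usable = data_drives * drive_bytes
print(usable / 2**40)                 # ~5.46 -> the "5.5 TB" Windows shows
```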
Front of Apple Xserve RAID
Back of Apple Xserve RAID
SPAM
Spam used to be a huge issue for me about 3 years ago. We used a software product called Mailsweeper. It worked OK, but like an antivirus program, the definitions had to be updated by us, and rules and exceptions had to be put in as well. It was a headache to manage and took focus away from other areas that needed attention. Every Monday, or the day after a long weekend, we would have well over 65,000 emails caught by Mailsweeper. It worked well, and sometimes too well: it would also catch a lot of false positives. We had to sift through all the spam just to find legitimate blocked emails. It was very time consuming.
These days I don't even think about spam. Spam is a word I forget exists. We made our entire company very happy when we had MessageLabs (www.messagelabs.com) take over and filter all of our email. They have a very robust multilayer filtering system: spam, viruses and porn, including images with excessive skin content. One of the best features is that users administer their own blocked emails. If an email is a false positive, the user gets an email from MessageLabs saying they have blocked messages. In the notification email there is a login issued by MessageLabs so they can go in and release or delete the email. They also have a retention period, so the emails won't pile up in their system.
How it works: you edit your MX records with your ISP to send all your SMTP traffic to the MessageLabs cluster. MessageLabs gives you a virtual IP, which fronts a cluster of towers that filter your email. Once MessageLabs has your email, they process it through their filters and then forward it to your mail server's IP. If you watch your firewall log, you will see entries for MLtower## sending email to your mail server. Best practice is to set your email server to send email only to MessageLabs and to receive email only from them as well. This can be done on your firewall and should also be possible on your email server. I use Exchange, and the setting is there.
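If you want to verify the cutover took effect, a quick MX lookup should show the filtering cluster instead of your own mail server. Here is a minimal sketch using the third-party dnspython package (example.com and the hostnames in the comments are placeholders, not our real records):

```python
# Sanity-check that a domain's MX records point at the filtering
# service instead of the mail server. Needs the dnspython package
# (pip install dnspython); example.com is a placeholder domain.
import dns.resolver

for record in dns.resolver.resolve("example.com", "MX"):
    host = str(record.exchange).rstrip(".")
    print(record.preference, host)
    # After the cutover you would expect the MessageLabs cluster
    # hostnames here rather than mail.example.com.
```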
Spam for us is a thing of the past. If you are still (LOL) having spam issues in your company, check out www.messagelabs.com or www.postini.com. I can only speak for ML, but I hear Postini is nice also.
Friday, May 26, 2006
Day 3 of our network upgrade - VLANs
Today I planned out our IP scheme. As I mentioned before, each floor will be its own VLAN. Here is how they will be configured:
16th floor VLAN IP
IP scheme 10.100.16.x / 24
Gateway 10.100.16.1
15th floor VLAN IP
IP scheme 10.100.15.x / 24
Gateway 10.100.15.1
14th floor VLAN IP
IP scheme 10.100.14.x / 24
Gateway 10.100.14.1
4th floor VLAN IP
IP scheme 10.100.4.x / 24
Gateway 10.100.4.1
3rd floor VLAN IP
IP scheme 10.100.3.x / 24
Gateway 10.100.3.1
A lot to think about, and thankfully EIGRP takes care of most of it :D
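The scheme is regular enough (floor number in the third octet, a /24 per floor, gateway at .1) that you can sanity-check it in a few lines. A throwaway sketch with Python's ipaddress module:

```python
# Sanity-check the per-floor VLAN plan: floor number in the third
# octet, a /24 per floor, gateway at .1.
import ipaddress

for floor in [16, 15, 14, 4, 3]:
    net = ipaddress.ip_network(f"10.100.{floor}.0/24")
    gateway = net.network_address + 1
    print(f"Floor {floor}: {net}, gateway {gateway}, "
          f"{net.num_addresses - 2} usable hosts")
```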
I ran some tests on printers and plotters as well (we do a ton of printing and plotting here) just to be sure that the print servers in one VLAN can communicate with the printers/plotters on another VLAN. We all know it works, but you don't want any surprises come cutover time. This is why we test and test and test some more.
On cutover day I have a lot of sensitive work to do. I need to change my netmask from /16 to /24, which means I need to get to all my servers, firewall objects and SAN devices. Those are critical. These are some of the things an MIS has to worry about. Yes, worry and stress to the point where you almost crap your pants. You ever get those feelings? LOL!
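The reason the mask change is so sensitive: under the old /16, every device considered all of 10.100.x.x local; under the new /24s, only its own floor is. A host that keeps a stale /16 will ARP for an off-floor address instead of handing it to the gateway, and the traffic just dies. A quick illustration (example addresses from the scheme above):

```python
# Why a stale /16 mask breaks the cutover: a 16th-floor server with
# the old mask thinks a 3rd-floor printer is on its own wire, so it
# never forwards the packet to the gateway.
import ipaddress

server = ipaddress.ip_interface("10.100.16.10/16")   # mask NOT updated
printer = ipaddress.ip_address("10.100.3.50")

print(printer in server.network)                          # True -> ARPs locally
print(printer in ipaddress.ip_network("10.100.16.0/24"))  # False with the new /24
```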
Thursday, May 25, 2006
Day 2 of our network upgrade - setting up the routes
Today the consultant and I went through the design and made sure we had all the routes in place. We are moving from a network that covers 5 floors but is a single flat network (VLAN 1) to a multi-VLAN network. Each floor will be its own VLAN; think of each floor as its own network. With this configuration I need to make sure all clients can connect to each other, to the servers, and to our London and Shanghai networks. Thank God for EIGRP. Without EIGRP our routing table would be an arm's length long, meaning I would have had to tell the router that every VLAN exists, where it is, and what path to take to reach each destination. I would have to think like a router while entering all of those routes into the routing table. I don't have to do that with EIGRP.
**What is EIGRP?
Enhanced Interior Gateway Routing Protocol (EIGRP) is a Cisco proprietary routing protocol based on their original IGRP. EIGRP is a balanced hybrid IP routing protocol, with optimizations to minimize both the routing instability incurred after topology changes, as well as the use of bandwidth and processing power in the router.
Some of the routing optimizations are based on the Diffusing Update Algorithm (DUAL) work from SRI, which guarantees loop-free operation. In particular, DUAL avoids the "count to infinity" behavior of RIP when a destination becomes completely unreachable. The maximum hop count of EIGRP-routed packets is 224.**
Or just look at the pic to get a better idea of what it is and does.
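If you want a feel for what EIGRP is actually computing, here is the classic composite metric with the default K values (K1 = K3 = 1, the rest 0), which collapses to 256 × (bandwidth term + delay term). A sketch of the math, not router output:

```python
# EIGRP composite metric with default K values (K1=1, K2=0, K3=1,
# K4=0, K5=0), which reduces to 256 * (bandwidth + delay), where
# bandwidth = 10^7 / slowest-link bandwidth in kbit/s and delay is
# the summed interface delay in tens of microseconds.
def eigrp_metric(min_bandwidth_kbps: int, total_delay_usec: int) -> int:
    bandwidth = 10**7 // min_bandwidth_kbps
    delay = total_delay_usec // 10
    return 256 * (bandwidth + delay)

# Example: slowest hop is 100 Mbit/s with 200 usec of total delay
# (two FastEthernet interfaces at 100 usec each).
print(eigrp_metric(100_000, 200))    # 30720
```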
Wednesday, May 24, 2006
Day 1 of our network upgrade - Cisco 4506s
We unpacked and placed all the switches in their locations throughout the company. Once we connected the access layer switches to the spare pair of fiber, we had to hope we would get a light at the end of the tunnel LOL. No, really: we had to hope for a light down in the datacenter at the end of the fiber run. When we were done installing the switches, we got our lights. Everything went smoothly. The new network is running in parallel.
Here is our new core.
2 Cisco 4506s. We will have 2 runs to each access layer switch from the core. This is in preparation for a future VoIP installation. Your foundation MUST be up to par before you can even think about VoIP.
We are finally moving to a more enterprise-level network. Cisco would call it a three-layer hierarchical model, or hierarchical internetworking model. It consists of:
-Core layer
-Distribution layer
-Access layer
Our setup will be core/distribution layer (all in the core) and access layer out on the floors.
Tuesday, May 23, 2006
My coworker won a MacBook from the Apple Store in NYC
What a lucky MOFO. If I had won I would have...... given it to my fiancée or my mother :D. It's funny how he won, too. He originally went to the opening and spent 3 hours on line. He got in and IMed me at home, saying he was standing next to celebs and such. He stayed for about an hour and left. He went out Friday night and went back to the Apple Store at 5am, filled out the entry form and left. He got a call on Sunday saying he had won. Congrats!!!
EMC Clariion CX300, our storage solution
When we decided to go with a SAN over a year ago, we based that decision on these criteria:
-centralized storage
-redundancy
-easy to manage (if you know what you are doing)
-scalability
-Dell/EMC maintenance and support (very good in this area)
-performance
-versatility
We are a Microsoft Windows shop, so it was a very easy integration. It took a day to set up and get all the servers configured. Each server needed 2 Host Bus Adapters (HBAs), drivers, SANsurfer and EMC PowerPath. These handle connectivity, management and licensing.
Once the SAN itself was unboxed, racked and its firmware updated, it was time to carve it up. Originally we had only the CX300 itself and 1 Disk Array Enclosure (DAE); we added a second one a few months later. I have a pic of how the LUNs look carved up on paper.
**What are LUNs?
In computer storage, a logical unit number or LUN is an address for an individual disk drive and by extension, the disk device itself. The term originated in the SCSI protocol as a way of differentiating individual disk drives within a common SCSI target device like a disk array.
The term has become common in storage area networks (SAN) and other enterprise storage fields. Today, LUNs are normally not entire disk drives but rather virtual partitions (or volumes) of a RAID set. (Wikipedia definition)**
After the carving we were ready to connect the servers to the SAN via McDATA Fibre Channel switches. Here we had to do zoning.
**What is SAN zoning?
SAN zoning is a method of arranging Fibre Channel devices into logical groups over the physical configuration of the fabric.
(This seems to be the most-used definition on the web, so why reinvent the wheel ;) )**
Zoning on the McDATA switches is really easy once you get the hang of it: find the server's WWN (World Wide Name) and the storage device's WWN, and add the two to the same zone. This lets the Windows server detect a new storage device in Disk Manager after you point the server to its LUN on the SAN using EMC Navisphere. No rebooting required if all goes right. Hit refresh a few times and your new volume is ready to be partitioned and formatted.
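Conceptually, a zone is just a named pairing of WWNs that are allowed to see each other. A toy sketch of the bookkeeping (all WWNs below are invented examples), handy for picturing how each zone pairs one server HBA with one storage port:

```python
# Toy model of single-initiator zoning: each zone pairs one server
# HBA WWN with one storage-port WWN. All WWNs are invented examples.
zones = {
    "fileserver1_hba0_spA": {
        "initiator": "10:00:00:00:c9:12:34:56",   # server HBA
        "target":    "50:06:01:60:10:60:08:24",   # storage processor port
    },
    "fileserver1_hba1_spB": {
        "initiator": "10:00:00:00:c9:12:34:57",
        "target":    "50:06:01:68:10:60:08:24",
    },
}

for name, members in zones.items():
    print(f"{name}: {members['initiator']} <-> {members['target']}")
```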
Here is a diagram of the server/SAN setup.
(I'll talk about the Apple Xserve RAID in a bit)
Seems easy enough, but I didn't do it alone. We had the Dell/EMC engineer with us the whole time. They won't let you perform these tasks without an engineer onsite; it is too easy for something to go wrong.
The benefit of all this is that if my server runs out of space, I can easily grow the LUN and manage everything from a single location. Centralized storage is a must in our environment. As much as we want to centralize everything, we still can't :/
Storage, one of the biggest headaches to deal with in this environment
I work for an architecture firm in NYC where we generate lots of large files: files from various 3D, CAD and imaging applications. In this environment it is hard to manage the storage. Why? Well, here is why.
When a project starts, a folder is created on the file server. This folder has a name or number associated with it to identify the project. For the life of the project, everything related to it is stored here (except emails). These projects last years. I really mean years. If you think about it, how long does it take to build, say, a dam or a skyscraper? That is how long these files have to be accessible on the network. If the project is active, it has to be on the production server. If it's on hold or wrapped up, it goes to the archive server. These project folders get to be well over 100 GB, and that is only 1 of many projects.
Why can't I just delete old files?
That's not my job, to be honest. Sounds like a don't-care attitude, right? Wrong! If I went to the lengths of deleting every old file, even though they are on tape backup, I would be restoring files EVERY DAY, all day long. It should not fall on me to decide what gets deleted and what doesn't; that should be the job of the team in charge of the projects. I only provide the means for them to store their files and work without problems. Nice cop-out, right ;)
So my team of admins and I have been faced with this problem of the servers filling up year after year. The way we used to deal with it was to throw disk at the server. At one point we had about 5 file servers of various sizes filling up. One year I thought I was in the clear. We had a 400 GB volume on our main file server and it filled up. We purchased an 800 GB drive cage for an HP server, so I thought to myself, and told my boss, that we were in the clear for the next 2 years. Well, 6 months go by and the 800 GB is down to 100 GB free. We moved inactive projects off to free up space, and this went on for a few more months. We then doubled that capacity. At the time, 800 GB was a hell of a lot of space for a company our size. When we filled it up, I was as shocked as anyone else would be. So now we had 1.6 TB. Again, we filled that up in 18 months. So in 2 years we ran through over 2 TB (if you include the projects we pulled off to make space). We decided to get an EMC CX300 with 2 TB for production and 4 TB for archive. This has been working out OK since we got it, but we are still running out of space.
We are currently looking for a hierarchical storage management (HSM) solution that integrates well with our environment. No, it's not as easy as just going out and getting IBM Tivoli, CommVault or Veritas Enterprise Vault <--- (we have this for email, and what a pain in the ass it is to set up). We have to make sure that whatever pointer file is left behind can be read by our CAD software. The problem is that our CAD software uses what is called an xref. An xref is a set of files associated with the main file you are working on. If I open building1234.dwg, it can call dozens of other files, and they will all open due to the xref. This is the root of the storage problem and the reason a solution isn't easy to find. Say we use an HSM solution to move files older than 30 days to an archive spot, leaving a pointer in place, and a moved file is part of an xref. If my CAD software cannot read that pointer file, we can potentially corrupt the main file and delay a project. I have been explaining this situation to the vendors, giving the same example and asking them to find out whether their HSM products will work with CAD software. Of course they don't test anything; they just say yeah, it should work, so I'll buy it. Well, if the software cost $30 I would buy it and try it, but these products cost thousands of dollars. We all know this software works on Word files and all the common stuff you find in financial, law and medical firms, but architecture firms are always left out. It's like no one knows about us. Maybe we should stop designing buildings.
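To make the requirement concrete, here is roughly the policy an HSM product would have to honor, sketched in Python. The xref manifest here is hypothetical; real xref tables live inside the DWG files and would need CAD-aware tooling to extract:

```python
# Sketch of the archive policy we need: move files untouched for 30+
# days, but never touch anything that participates in a DWG xref
# chain. "xref_manifest" is a hypothetical, pre-built map of main
# drawing -> referenced files; extracting real xrefs requires
# CAD-aware tooling.
import os
import time

AGE_LIMIT = 30 * 24 * 3600   # 30 days, in seconds

def archive_candidates(project_root, xref_manifest):
    protected = set()
    for main_dwg, refs in xref_manifest.items():
        protected.add(main_dwg)
        protected.update(refs)

    now = time.time()
    for dirpath, _subdirs, filenames in os.walk(project_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if path in protected:
                continue    # part of an xref chain: leave it in place
            if now - os.path.getmtime(path) > AGE_LIMIT:
                yield path  # safe to move off, leaving a pointer behind
```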
Thursday, May 11, 2006
Best App
Google Earth
I am currently using it to find my next home. Get the address of a property of interest (not always provided on certain real estate sites), plug it into Google Earth, and you can get an idea of what the neighborhood looks like. You can also get an idea of the lot size without having to visit the property, and of how close your new home will be to certain landmarks: malls, supermarkets, theaters, schools, your job, hospitals, police, transportation, etc. This is how I am currently using Google Earth. What's yours?
First entry
I finally decided to create a blog. I took so long because, you could say, I have been there and done that in the past. 5 years ago I had a website where I spoke about all my hobbies: computers, home theater, paintball and my ride ;) This blog will mostly be about tech stuff.
I'll be posting a lot in the weeks to come.