
Colocation NOC & VoIP (IMS)
Data Center BUSINESS LEVELS:
Listed from the lowest level up, each level can encompass all of the business activities of the levels that follow it:
- Data Center Construction
- Carrier Hotel, Massive Co-location (Cages, Suites)
- Single/Multi Rack Level Co-location, Networking, NOC
- Server Co-location (from 1 server to 1/4-1/2-1 Rack to a few Racks), Dedicated Server
- Web Hosts, Application Hosting, Service Providers (VoIP, IPTV, Internet TV)
- Value Added Resellers
1. DATA CENTER CONSTRUCTION
Tier Classification:

| | Tier I | Tier II | Tier III | Tier IV | RestonX T-IV |
|---|---|---|---|---|---|
| Active Capacity Components to Support IT Load | N | N+1 | N+1 | N after any failure | N after any failure |
| Distribution Paths | 1 | 1 | 1 Active, 1 Alternative | 2 Simultaneously Active | 2 Simultaneously Active |
| Concurrently Maintainable | No | No | Yes | Yes | Yes |
| Fault Tolerance (single event) | No | No | No | Yes | Yes |
| Compartmentalization | No | No | No | Yes | Yes |
| Continuous Cooling (load density dependent) | * | * | * | Yes (Class A) | Yes (Class A) |

Common attributes found in data centers that are unrelated to tier requirements:

| | Tier I | Tier II | Tier III | Tier IV | RestonX T-IV |
|---|---|---|---|---|---|
| Building Type | Tenant | Tenant | Stand-alone | Stand-alone | Stand-alone |
| Staffing (shifts / staff per shift) | None | 1 shift, 1/shift | 1+ shifts, 1-2/shift | “24 by Forever”, 2+/shift | “24 by Forever”, 2+/shift |
| Usable for Critical Load | 100% N | 100% N | 90% N | 90% N | 90% N |
| Initial Build-out kW per Cabinet (typical) | <1 kW | 1-2 kW | 1-2 kW | 1-3 kW | 3.84 kW |
| Ultimate kW per Cabinet (typical) | <1 kW | 1-2 kW | >3 kW [2][3] | >4 kW [1][2] | 6 kW |
| Support Space to Raised-Floor Ratio | 20% | 30% | 80-90+% | 100%+ | 200% (2:1) |
| Raised-Floor Height (typical) | 12” | 18” | 30-36” | 30-42” | 30-42” |
| Floor Loading, lbs/sf (typical) | 85 | 100 | 150 | 150+ | 150+ |
| Utility Voltage (typical) | 110, 208, 480 V | 110, 208, 480 V | 12-15 kV | 12-15 kV | 12-15 kV |
| Single Points-of-Failure | Many + human error | Many + human error | Some + human error | Fire, EPO + some human error | Fire, EPO + some human error |
| Representative Planned Maintenance Shutdowns | 2 annual events at 12 hours each | 3 events over 2 years at 12 hours each | None required | None required | None required |
| Representative Site Failures | 6 failures over 5 years | 1 failure every year | 1 failure every 2.5 years | 1 failure every 5 years | 1 failure every 5 years |
| Annual Site-Caused, End-User Downtime (based on field data) | 28.8 hours | 22.0 hours | 1.6 hours | 0.8 hours | 0.8 hours |
| Resulting End-User Availability Based on Site-Caused Downtime | 99.67% | 99.75% | 99.98% | 99.99% | 99.99% |
| Typical Months to Plan and Construct | 3 | 3-6 | 15-20 | 15-30 | 15-30 |
| First Deployed | 1965 | 1970 | 1985 | 1995 | 2006+ |
* For additional information on Continuous Cooling, refer to the white paper Continuous Cooling is Required for Continuous Availability.
[1] 3.5 kW per cabinet over large areas is acceptable for traditional air-cooling designs.
[2] Higher kW/cabinet densities require a greater ratio of support space to computer floor (at least 1:1 at 3kW/cabinet, 2:1 at 6kW/cabinet, 3:1 at 9kW/cabinet, etc.) Generally, deeper raised floors are required for higher densities.
[3] Most sites have a difficult time maintaining stable and predictable cooling for racks in the 1-2 kW range. Major process improvements are required before entertaining rack densities above 2 kW. See Institute white paper, How to Meet “24 by Forever” Cooling Demands for your Data Center.
Tier Classification © 2015 Uptime Institute, Inc.
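The availability percentages in the table follow directly from the annual downtime figures over an 8,760-hour year. A minimal sketch reproducing them:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def availability(downtime_hours: float) -> float:
    """Percent availability implied by annual site-caused downtime."""
    return 100.0 * (1.0 - downtime_hours / HOURS_PER_YEAR)

# Annual site-caused, end-user downtime per tier, from the table above.
tier_downtime = {"Tier I": 28.8, "Tier II": 22.0, "Tier III": 1.6, "Tier IV": 0.8}

for tier, hours in tier_downtime.items():
    print(f"{tier}: {availability(hours):.2f}%")  # 99.67, 99.75, 99.98, 99.99
```

Note that even the jump from Tier I to Tier II only moves the headline number from 99.67% to 99.75%; the downtime hours (28.8 vs. 22.0) are the more intuitive comparison.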
Data Center Construction costs:
| Data Center Construction for Tier | Tier I | Tier II | Tier III | Tier IV |
|---|---|---|---|---|
| Approximate Minimum Construction Cost/SF for a 15,000 SF facility (2001)* | $450 | $600 | $900 | $1,100+ |
* Prices run higher in expensive markets such as New York City and Washington, DC.
Outsourcing Data Center Needs vs. Building an In-House Solution:
| SMB (Small or Medium-Size Business) | In-House | Data Center |
|---|---|---|
| Power: a “modern” power design, including redundant and backup power | -1 | +1 |
| Ping: ability to choose bandwidth providers | -1 | +1 |
| Pipe: heating, ventilation and air conditioning (HVAC) | -1 | +1 |
| Affordable real estate / length of lease agreement | -1 | +1 |
| Proximity to fiber optic lines | -1 | +1 |
| Scalability, including space and power | -1 | +1 |
| Staffing needs | -1 | +1 |
| Protection from flood, fire and other natural disasters | -1 | +1 |
| Time-to-market | -1 | +1 |
| Security | -1 | +1 |
| Accessibility and distance from major metropolitan areas | +1 | -1 |
| 24x7x365 access to systems | +1 | +0 |
| Cost | -1 | +1 |
| Ease of monitoring equipment | +1 | +1 |
| On-site customer support | +1 | +0 |
| Fire suppression capabilities | -1 | +1 |
| Total | -8 | +12 |
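The comparison above is a simple unweighted tally; a business could re-run it with its own weights per factor. A minimal sketch reproducing the totals (factor names abbreviated from the table):

```python
# Each factor scores (in_house, data_center); values taken from the table above.
factors = {
    "Power design": (-1, +1),
    "Ping (bandwidth provider choice)": (-1, +1),
    "Pipe (HVAC)": (-1, +1),
    "Real estate / lease": (-1, +1),
    "Proximity to fiber": (-1, +1),
    "Scalability": (-1, +1),
    "Staffing": (-1, +1),
    "Disaster protection": (-1, +1),
    "Time-to-market": (-1, +1),
    "Security": (-1, +1),
    "Distance from metro areas": (+1, -1),
    "24x7x365 access": (+1, 0),
    "Cost": (-1, +1),
    "Ease of monitoring": (+1, +1),
    "On-site customer support": (+1, 0),
    "Fire suppression": (-1, +1),
}

in_house = sum(score[0] for score in factors.values())
data_center = sum(score[1] for score in factors.values())
print(f"In-house: {in_house}, Data center: {data_center}")  # In-house: -8, Data center: 12
```

Swapping the ±1 values for weighted scores (e.g., cost counted double) is a one-line change per factor.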
2. CARRIER HOTEL, MASSIVE CO-LOCATION (CAGES, SUITES):
A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression), and special security devices. Achieving the online availability expected of Internet servers is well beyond the scope of in-house deployments for most organizations, which is why data centers are used for this purpose.
A carrier hotel, also called a co-location center, is a secure physical site or building where data communications and media carriers converge and are interconnected. It is common for numerous service providers to share the facilities of a single carrier hotel. This minimizes overhead and optimizes communications efficiency for all participants, as long as the infrastructure is sufficient to handle all the data at times of peak demand. A carrier hotel is a sizable facility, and clients cannot circumvent it to obtain lower prices for exchanging data.
Co-location refers to the provision of space for a customer's communications equipment on the service provider's premises. For example, the owner of a Web site can place the site's servers on the premises of an Internet service provider (ISP). A carrier hotel provides co-location on a massive scale, offering customers services ranging from modest-sized racks to dedicated rooms or groups of rooms. Some carrier hotels offer hardware and software installation, maintenance, and update services. A carrier hotel may also house a meeting room where representatives of all the companies or guests served by the facility can exchange information and ideas of common interest.
Carrier neutrality: a carrier hotel is usually carrier neutral, so its direct co-location customers can use any carrier for their Internet connections and bandwidth, and can bring any carrier into the data center according to their requirements. Another important feature of carrier hotels is that they do not compete with their customers' businesses; their agreements prevent them from engaging in the same business activities as their downstream service providers. Carrier hotels usually sell cage-sized and larger-scale co-location space, whereas other data centers usually also provide hosting, dedicated servers, and other Internet services, and so actually compete with their own co-location clients.
RestonX Northern Virginia Data Center:

| Facility | |
|---|---|
| Total Building Size | 285,000 SF |
| Data Center Tier | IV |
| Data Center Type | Carrier Hotel |

| Colocation Space | Premier |
|---|---|
| Raised-floor cage space | Up to 30,000 SF |
| Single cabinet colocation | Yes |
| SAS 70 Certified Facility | In process |

| Colocation Infrastructure | Superior |
|---|---|
| Power | Diverse power from separate substations |
| UPS watts/square foot | Up to 200 |
| AC UPS power | N+1 |
| Emergency generator power | N+1 |
| Power types | AC/DC |
| Fire suppression system | Early smoke detection and dry-pipe pre-action fire suppression |
| Secured facility | Yes |
| Secure site access | 24x7 |
| Security staffing | 24x7 |
| Optical turnstiles | Yes |
| Setback | 50 feet |
| CCTV surveillance | Internal and external |
| Remote hands services | 24x7 |
| Certified technicians | Yes |
| Turn-key equipment installation | Yes |
| Operations outsourcing | Yes |
| Peering opportunities | |

| Office Space | First Class |
|---|---|
| Executive offices | Yes |
| Custom office suites | Up to 148,000 SF |
| Disaster recovery work area office space | Yes |
| Satellite dish allowed | Up to 1 m on the rooftop |
| Power monitoring technology | BCM (Branch Circuit Monitoring) |
3. SINGLE/MULTI RACK LEVEL CO-LOCATION, NETWORKING, NOC
Total Cost of Ownership, Co-location

| Category | Item | Specification | Price | Commitment |
|---|---|---|---|---|
| Colocation Space | Suite | 400 to 500 SF | $15 to $35 / SF / month | 1 |
| | Cage | 100 SF (block) | $35 to $45 / SF / month | Block of 100 SF |
| | Cabinet | | | 1 |
| | Space | Per SF | $35 / SF / month | 30 SF |
| | Setup fee | One time | | |
| Satellite Station | Dish space | Up to 1 m diameter on the rooftop | $500/month | 1 m |
| | Cabling | Setup fee: per linear foot (from dish to cabinet) | $400 first linear foot, $250 each additional linear foot | |
| Dedicated Bandwidth | Premium | Per Mbps | $30/Mbps, 95th percentile | 10/100 Mbps |
| | Value colo | Per Mbps | $15-$20/Mbps, 95th percentile | 10/100 Mbps |
| | Low cost | Per Mbps | $8-$14/Mbps, 95th percentile | 10/100 Mbps |
| Peering Bandwidth (optional) | Any2IX | 20 amp | $200 | Dedicated bandwidth + 1-year commitment |
| Power | AC 110 | 20 amp | $200 | 20 amp |
| Legal Fees | General liability + damage waiver insurance | $3,000,000 liability + $2,000,000 damages | $500-$1,000/month* | $3,000,000 + $2,000,000 cover (annual) |
| | Attorney fees | Hourly basis | $250/hour | 4 hours/year |
| Software/Security/Control Panel | IT security | Per server/perimeter | | |
| Switches | 100/100 base | | | |
| Cables | Category 7A | 10/100/1000 Mbps | | |
| Rack | 19” 4-post rack | 44U, HP/Dell | Asset: $550/rack | 1 |
| Server | IT server | 5 years projected | $1,000-$3,400 | |
* Depends on Business
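The dedicated-bandwidth prices above are quoted per Mbps at the 95th percentile. Under this common billing model, the provider samples port utilization (typically every 5 minutes), discards the top 5% of samples for the month, and bills for the highest remaining sample, so short bursts above the baseline are free. A minimal sketch with hypothetical usage numbers:

```python
def billable_rate(samples_mbps):
    """95th-percentile billing: drop the top 5% of samples, bill the peak that remains."""
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95)  # index just past the 95th percentile
    return ordered[max(cutoff - 1, 0)]

# Hypothetical month: mostly steady 40 Mbps with short bursts to 100 Mbps.
samples = [40] * 95 + [100] * 5      # the bursts are 5% of samples
rate = billable_rate(samples)        # bursts fall inside the discarded 5%
cost = rate * 30                     # at the $30/Mbps Premium rate
print(rate, cost)                    # 40 1200
```

If the bursts lasted longer than 5% of the month, the billable rate would jump to the burst level, which is why sustained peaks matter far more than momentary ones under this model.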
In the data-centric enterprises of today there is a need for more processing, data, and connectivity in less floor space. Storage is more compact, servers are now available in blades with multiple servers within one chassis, and likewise, connectivity is responding with high-density solutions. The cost per square inch of utilization within the data center equates to prime real estate. With new demands and mandates for redundancy, companies are faced with these expenses not only for one data center, but, in many instances, two. The decision to in-source or outsource the redundancy is based on several factors, with cost being a prime consideration.
Some experts estimate that by the end of the century there will be one terabyte of data stored for each and every person on earth. So how is it that, even with these increases in storage, the overall real estate required can shrink? The answer is clear: more data fits in less space, and more instructions are processed in the same chip space used 18 months ago. There is a fine art to planning capacity, growth, space, and power. According to the "Global Data Center Report," there are millions of square feet of data center space available from various data center providers. There are lessons to be learned from their art of capacity planning, regardless of the size of your business or the Tier rating of your data center.
Data centers are rated in four tiers. Each tier represents a level of redundancy and usage, and each level of redundancy requires additional space, which brings us to the real need for high density. The size of the data center depends on the number of persons served. The BICSI (Building Industry Consulting Service International, Inc.) Telecommunications Distribution Methods Manual recommends 0.75 square feet of equipment room space for each work area (one work area per 100 square feet of usable floor space), with an additional 0.25 square feet for each 250-square-foot building automation zone. Hospitals, hotels, and other multi-tenant environments may require additional square footage in the data center to accommodate future growth and expansion. Companies should anticipate and plan for future growth to ensure the main data facility has enough ports available and space allocated to add additional personnel and resources to their network.
For instance, if a building provides 20,000 square feet of usable floor space, this yields roughly 200 work areas and requires 170 square feet of equipment room space, including building automation space. If a tenant is renting space at $40.00 per square foot, the price tag on the equipment room alone is $6,800.00. The area of one rack (19" wide x 24" deep) plus clearances (3 feet in front of the rack plus 2 feet to the rear) equates to $442.40 in floor space per rack. Note that no side clearances are included, as they would only apply to the end racks. This figure does not include heating and cooling, monitoring, etc. A typical rack has about 42 RU (1.75" per Rack Unit). Each RU then costs $10.53 in this example. Obviously, rental space comes in a variety of prices, but it is easy to see that these areas quickly become high-dollar spaces.
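The rack-space arithmetic above is easy to reproduce; the sketch below uses the same figures (19" x 24" rack, 5 feet of clearance, $40/SF rent) and differs from the quoted $442.40 only in the rounding of the rack footprint:

```python
RENT_PER_SF = 40.00                      # annual rent, $/square foot

# Rack footprint: 19" wide by 24" deep, plus 3 ft front and 2 ft rear clearance.
width_ft = 19 / 12                       # ~1.58 ft
depth_ft = 24 / 12 + 3 + 2               # 7 ft including clearances
rack_area_sf = width_ft * depth_ft       # ~11.08 SF

rack_cost = rack_area_sf * RENT_PER_SF   # ~$443/rack (the text rounds to $442.40)
ru_cost = rack_cost / 42                 # ~$10.56 per RU in a 42U rack
print(f"${rack_cost:.2f} per rack, ${ru_cost:.2f} per RU")
```

The equipment-room sizing works the same way: 200 work areas x 0.75 SF + (20,000 / 250) x 0.25 SF = 170 SF, and 170 x $40 = $6,800.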
The equipment that resides in the rack, with its related maintenance, adds further to the figure above. However, protection from downtime and its associated costs makes these data centers some of the most valuable assets within a company. There is a finite amount of expandability outwards or upwards, so the best solution is to utilize each rack unit to its fullest potential. The same can be said for racks in intermediate cross-connects and other distribution areas. This floor space cannot be used for work areas, and therefore equipment productivity is the only revenue contribution these areas make.
The industry is cognizant of the costs and has responded with new solutions to the old space problem. It is important to understand that many of the data centers in use today were designed for yesterday's systems and system needs. New applications, data requirements and telecommunication needs are now filling every conceivable inch. However, there is new promise with high-density blade servers, high-density switches and high-density connectivity components. Each RU (Rack Unit) is now capable of housing more ports and processing power than ever before. Additional enhancements have been made through standardization of equipment airflow, eliminating some of the legacy equipment's requirements for air space above and/or below the equipment, increasing the density of the racks as well. IDC expects the value of the blade server market to be $3 billion by the year 2005. Older servers required as many as 5 RU to operate; some won't even fit in racks and require their own dedicated floor space. The costs from the examples above make the rental tag on the 5RU server's space $52.65. With some of the new blade servers, 12 servers can now fit in the same 5 rack units, lowering the space rental costs of each server to $4.39.
Switches have also increased in port density per rack unit. The older 24-port switches have been updated and now many offer as many as 96 ports in roughly the same 2-unit space. KVM (Keyboard/Video/Mouse) switches have also entered the data center with support for as many as 128 ports in a single box, eliminating the need for separate input/output interfaces to multiple servers. If using the same costs above, for an old 24-port switch residing in 2 rack units, the costs per port for rental space is $0.88 per port. The 96-port equivalent carries a space rental cost of $0.22 per port, a 4X reduction. One key consideration is the trend by many equipment manufacturers to utilize RJ-21 or mini RJ-21x connectors which terminate to 10/100 2-pair connections. While this may seem like a cost and space savings, should a company upgrade their blade servers to gigabit, re-cabling to a standards-based 4-pair connection would be inevitable. Additional cables will need to be added to support the migration from 2-pair 10/100BASE-T to 4-pair 1000BASE-T or higher applications.
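The per-port and per-server comparisons above all derive from the $10.53/RU figure from the earlier rental example. A quick sketch:

```python
RU_COST = 10.53  # space rental per rack unit, from the earlier example

def cost_per_port(rack_units: int, ports: int) -> float:
    """Space rental cost allocated to each switch port."""
    return rack_units * RU_COST / ports

old_switch = cost_per_port(2, 24)   # ~$0.88/port, as in the text
new_switch = cost_per_port(2, 96)   # ~$0.22/port: the 4x reduction
server_5ru = 5 * RU_COST            # $52.65 for a legacy 5 RU server
blade = server_5ru / 12             # ~$4.39 when 12 blades share the same 5 RU
print(old_switch, new_switch, blade)
```

Note these are space costs only; power, cooling, and the equipment itself dominate the real total cost of ownership.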
The same applies to new connectivity products on the passive side. Another by-product of high-density products is lower overall space costs for companies that have several non-terminated inactive ports. Industry standards recommend that the horizontal cabling in a data center be installed with enough capacity for future growth, so that the horizontal can remain unchanged throughout the life of the system. Dark fiber and unused ports consume space even though they are not immediately productive.
New high-density patch panels lower the cost of non-productive space by placing more connections within the same rack space. When selecting a high-density patch panel, there are more important considerations than how many ports fit into a rack space. Too much density can actually decrease the throughput in your environment through the coupling of noise from one connector to another within the same patch panel. Greater noise equals lower bandwidth, more errors, more retransmissions, and will negate any operational savings through loss of productivity.
Another industry innovation that complements the trend toward higher density is the new BladePatch™ patch cord, which has a push-pull latching mechanism. This new patch cord eliminates the standard RJ45 latch, thereby reducing the finger space required to depress it. The result of this push-pull RJ45 design is easier access and faster moves, adds, and changes.
Whatever the innovation for your area, there are a few key considerations. Quality should be paramount: this is not an area in which to scrimp on equipment or connectivity, leaving you penny wise and dollar foolish. The number of users connected, the costs associated with downtime, and the potential cost increase from purchasing commodity or lower-end equipment will eradicate any savings originally realized. When figuring total cost of ownership and return on investment for this equipment, recognize its importance to your company's operations. Don't rely on a single vendor for advice; comparison shop with multiple contenders and evaluate based on the best value, not price alone. Being price conscious matters, but not at the expense of quality and reliability. Watch for mean time between failures (MTBF) numbers, and understand ALL of the comparative figures and what each means. Buying quality equipment may mean the difference between spending a week in Tahiti or a week in your new data center.
4. SERVER CO-LOCATION (FROM 1 SERVER TO 1/4-1/2-1 RACK TO A FEW RACKS), DEDICATED SERVER
Total Cost of Ownership, Co-location

| Category | Item | Specification | Price | Commitment |
|---|---|---|---|---|
| Colocation | Per server | 1 to 2U + 1 TB bandwidth (1.5 amp) | Minimum $99/month + setup fee | * Without service or control panel |
| | Per rack | 100 SF (block) | $550/month + setup fee | 44U; bandwidth not included |
| Dedicated Server | Per server | 1-2U + 1 TB bandwidth | $145/month + setup fee | |
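Under hypothetical assumptions (low-end figures from the tables above, with setup fees and bandwidth overages ignored), per-server colocation and a dedicated server can be compared over the 5-year projected server life from the earlier TCO table:

```python
MONTHS = 60                 # 5-year projected server life, per the earlier table

# Assumed low-end figures from the tables above; setup fees omitted.
colo_monthly = 99           # per-server colocation, $/month (minimum)
server_capital = 1000       # low end of the $1,000-$3,400 server cost range
dedicated_monthly = 145     # dedicated server, $/month (hardware included)

colo_total = colo_monthly * MONTHS + server_capital  # you buy the hardware
dedicated_total = dedicated_monthly * MONTHS         # provider owns the hardware
print(colo_total, dedicated_total)                   # 6940 8700
```

With a $3,400 server the comparison flips ($9,340 vs. $8,700), so the hardware budget, not the monthly fee, often decides between the two options.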
5. WEB HOSTS, APPLICATION HOSTING, SERVICE PROVIDERS (VOIP, IPTV, INTERNET TV)
Total Cost of Ownership, Co-location

| Category | Item | Specification | Price | Commitment |
|---|---|---|---|---|
| Web Space | Per account | | $2.45 to $19.95/month | * With control panel and bandwidth; about 300 accounts per server |
| File Server | | | + bandwidth charges | |
| Virtual Private Server | Per virtual server | | $19.95+/month | * Control panel not included |
| Service Centric | | | | |