Data Center

A data center is a building or dedicated space within a building used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls and various security devices. A large data center is an industrial-scale operation that can use as much electricity as a small town.

Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize them were devised, such as standard racks to mount equipment, raised floors, and cable trays. A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important: computers were expensive and were often used for military purposes, so basic design guidelines for controlling access to the computer room were devised.

During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. As information technology operations grew in complexity, however, organizations became aware of the need to control IT resources. The advent of Unix in the early 1970s led to the proliferation of freely available Linux-compatible PC operating systems during the 1990s. These machines were called “servers”, because timesharing operating systems like Unix rely heavily on the client-server model to facilitate sharing unique resources between multiple users. The availability of inexpensive networking equipment, coupled with new standards for structured network cabling, made it possible to use a hierarchical design that put the servers in a dedicated room inside the company. The use of the term “data center”, as applied to specially designed computer rooms, started to gain popular recognition about this time.

The boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies, so many firms started building very large facilities, called Internet data centers (IDCs), which provide commercial clients with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and operational requirements of such large facilities, and these practices eventually migrated to private data centers, adopted largely because of their practical results. Data centers for cloud computing are called cloud data centers (CDCs), although nowadays the distinction has almost disappeared and both are simply called “data centers”.

With the increasing uptake of cloud computing, business and government organizations scrutinize data centers more closely in areas such as security, availability, environmental impact and adherence to standards. Standards documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data-center design.
Well-known operational metrics for data-center availability can be used to evaluate the commercial impact of a disruption. Development continues in operational practice, and also in environmentally friendly data-center design. Data centers are costly to build and to maintain; modernization and data center transformation enhance performance and energy efficiency.
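
As a rough, illustrative calculation (the availability figures below are common industry shorthand, not values taken from this text or from any particular standard), the downtime budget implied by an availability percentage can be computed directly:

```python
# Illustrative only: annual downtime implied by an availability figure.
# The percentages are common shorthand ("three nines", "four nines"),
# not values drawn from a specific standard.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def annual_downtime_minutes(availability: float) -> float:
    """Downtime budget per year for a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for a in (0.999, 0.9999, 0.99999):
    print(f"{a:.3%} available -> {annual_downtime_minutes(a):7.1f} minutes of downtime per year")
```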

IT operations are a crucial aspect of most organizational operations around the world. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of the mechanical cooling and power systems serving the data center, and of its fiber optic network connections.
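
As a minimal sketch of why redundancy raises availability, assuming component failures are independent (a simplification used only for illustration), the availability of a set of redundant components is one minus the probability that all of them are down at once:

```python
# Minimal sketch: availability of redundant (parallel) subsystems,
# assuming independent failures -- a simplifying assumption for illustration.

def parallel_availability(component_availabilities):
    """System is up as long as at least one redundant component is up."""
    all_down = 1.0
    for a in component_availabilities:
        all_down *= (1.0 - a)
    return 1.0 - all_down

# Two hypothetical 99%-available cooling units in an N+1 arrangement:
print(parallel_availability([0.99, 0.99]))  # approximately 0.9999
```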

The Telecommunications Industry Association’s Telecommunications Infrastructure Standard for Data Centers[14] specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces,[16] provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:

Operate and manage a carrier’s telecommunication network
Provide data center based applications directly to the carrier’s customers
Provide hosted applications for a third party to provide services to their customers
Provide a combination of these and similar data center applications
Effective data center operation requires a balanced investment in both the facility and the housed equipment. The first step is to establish a baseline facility environment suitable for equipment installation. Standardization and modularity can yield savings and efficiencies in the design and construction of telecommunications data centers, both initially and over time.

Organizations are experiencing rapid IT growth but their data centers are aging. Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data is one factor driving the need for data centers to modernize.

In May 2011, data center research organization Uptime Institute reported that 36 percent of the large companies it surveyed expect to exhaust IT capacity within the next 18 months.

Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from the traditional method of data center upgrades, which takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.

Standardization/consolidation: Reducing the number of data centers and avoiding server sprawl often includes replacing aging data center equipment, and is aided by standardization.
Virtualization: IT virtualization technologies help to lower capital and operational expenses and reduce energy consumption. Virtualization technologies are also used to create virtual desktops, which can then be hosted in data centers and rented out on a subscription basis. Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.
Automation: Automating tasks such as provisioning, configuration, patching, release management and compliance reduces manual effort, and becomes all the more necessary as skilled IT workers grow scarcer; a minimal sketch of such a task loop follows this list.
Security: Protection of virtual systems is integrated with the existing security of physical infrastructures.
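
The sketch below illustrates the kind of task loop such automation boils down to; the task names, hosts and apply() placeholders are hypothetical and not tied to any particular automation product:

```python
# Hypothetical sketch of a data-center automation loop: run a fixed set of
# idempotent tasks against each host and report the outcome. Task bodies
# are placeholders, not calls into any real provisioning or patching tool.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    apply: Callable[[str], bool]  # returns True when the host ends up compliant

def provision(host: str) -> bool:
    return True   # placeholder: register the host in inventory, assign addresses

def patch(host: str) -> bool:
    return True   # placeholder: apply outstanding OS and firmware patches

TASKS: List[Task] = [Task("provision", provision), Task("patch", patch)]

def run(hosts: List[str]) -> None:
    for host in hosts:
        for task in TASKS:
            ok = task.apply(host)
            print(f"{host}: {task.name} -> {'ok' if ok else 'FAILED'}")

run(["rack1-node01", "rack1-node02"])
```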

 

 

The physical environment of a data center is rigorously controlled. Air conditioning is used to control the temperature and humidity in the data center. ASHRAE’s “Thermal Guidelines for Data Processing Environments” recommends a temperature range of 18–27 °C (64–81 °F), a dew point range of −9 to 15 °C (16 to 59 °F), and an ideal relative humidity of 60%, with an allowable range of 40% to 60% for data center environments. The temperature in a data center naturally rises because the electrical power consumed heats the air; unless that heat is removed, the ambient temperature will rise until electronic equipment malfunctions. By controlling the air temperature, the server components at the board level are kept within the manufacturer’s specified temperature and humidity range. Air conditioning systems help control humidity by cooling the return air below its dew point. If humidity is too high, water may begin to condense on internal components; if the atmosphere is too dry, static electricity discharge can damage components, so ancillary humidification systems may add water vapor. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs.
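
A minimal sketch, using the thresholds quoted above, of how a monitoring script might flag readings that leave the recommended envelope (the function name and the sample readings are hypothetical):

```python
# Minimal sketch: check a reading against the ASHRAE recommended envelope
# quoted above (18–27 °C, dew point −9 to 15 °C, 40–60% relative humidity).
# The function name and the sample readings are hypothetical.

def within_recommended(temp_c: float, dew_point_c: float, rh_percent: float) -> bool:
    """True when the reading sits inside the recommended envelope."""
    return (18.0 <= temp_c <= 27.0
            and -9.0 <= dew_point_c <= 15.0
            and 40.0 <= rh_percent <= 60.0)

print(within_recommended(22.0, 10.0, 50.0))  # True
print(within_recommended(30.0, 10.0, 50.0))  # False: supply air too warm
```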

Modern data centers try to use economizer cooling, in which outside air is used to keep the data center cool. At least one data center cools its servers with outside air during the winter, dispensing with chillers and air conditioners entirely, which creates potential energy savings in the millions. Indirect air cooling is increasingly being deployed in data centers globally; its more efficient cooling lowers power consumption costs. Many newly constructed data centers also use indirect evaporative cooling (IDEC) units, as well as other environmental features such as sea water, to minimize the amount of energy needed to cool the space.
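
As a rough sketch only (the set points are illustrative placeholders and would come from a site’s own design, not from this text), an airside economizer controller compares outside conditions with the supply-air target and falls back to mechanical cooling when free cooling cannot meet it:

```python
# Rough sketch of an airside-economizer decision. Both set points are
# illustrative placeholders, not engineering guidance.

SUPPLY_AIR_TARGET_C = 24.0       # desired supply-air temperature (illustrative)
MAX_OUTSIDE_DEW_POINT_C = 15.0   # avoid drawing in overly humid air (illustrative)

def use_outside_air(outside_temp_c: float, outside_dew_point_c: float) -> bool:
    """Free cooling is viable when outside air is cool and dry enough."""
    return (outside_temp_c < SUPPLY_AIR_TARGET_C
            and outside_dew_point_c <= MAX_OUTSIDE_DEW_POINT_C)

print(use_outside_air(5.0, 0.0))    # True: winter air, run the economizer
print(use_outside_air(30.0, 20.0))  # False: fall back to chillers
```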

Telcordia GR-2930, NEBS: Raised Floor Generic Requirements for Network and Data Centers, presents generic engineering requirements for raised floors that fall within the strict NEBS guidelines.

There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of raised floors include stringer, stringerless, and structural platforms, all of which are discussed in detail in GR-2930.

The structural platform design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.

Data centers typically have raised flooring made up of 60 cm (2 ft) removable square tiles. The trend is towards an 80–100 cm (31–39 in) void to provide better and more uniform air distribution. The void provides a plenum for air to circulate below the floor, as part of the air conditioning system, as well as space for power cabling.
