Archive

Archive for the ‘DataCenter’ Category

Natural gas plants cool data centres as a by-product in an audacious winner for Uptime’s Green IT #CRE

April 11, 2013

 

Liquid Gas Plants Could Power Data Centres For Nothing

An ingenious proposal to locate data centres near liquefied natural gas (LNG) plants could provide all their cooling and electrical power for nothing – and the group behind it hopes to interest European providers in the concept.

Natural gas storage plants produce excess refrigeration and waste enough energy to run a data centre, according to TeraCool. The plan is being looked at by LNG plant owners in several countries, and has won the Audacious Idea prize in the 2013 Green Enterprise IT Awards from the Uptime Institute.

 

Free energy from gas plants

Natural gas is typically liquefied at the places where it is extracted from the ground and transported in liquid form in tankers to LNG plants where it is stored in giant cryogenic tanks. When needed, it is turned back into a gas (vaporised) for circulation in the gas supply network.

Liquid gas stores energy, and the vaporisation process releases that energy, while also producing very low temperatures. However, this energy and cooling normally go to waste, because LNG plants are situated away from population centres which could use them.

“We think there is a tremendous opportunity,” Bob Shatten, president of TeraCool, told TechWeekEurope. “We have had interest from some LNG terminals – now we need to get the data centre world to step outside of the box and align their interests at one of these locations.”

TeraCool proposes that data centres be built near the LNG terminals, provided there are good data connections, improving the energy efficiency of both sites. The waste heat from the data centre’s servers could help vaporise the gas, and the energy released could power the data centre.

“The system adds an additional refrigeration loop to the circuit in which the refrigerant is pressurized, warmed and vaporised,” explains the Institute’s citation. “The expanding refrigerant drives a turbine coupled to a generator to produce electricity in a combustion-free, emissions-free process.”

LNG plants are often huge, and some could easily provide enough power for the largest data centres ever built, said Shatten. “The upper bound for data centres is around 90MW in the US – and one terminal we looked at in Seoul, Korea, has 22 storage tanks, and could provide 350MW of cooling and 87MW of electricity.”

Since a 90MW data centre operating at a PUE (power usage effectiveness) of 1.3 would only need around 27MW of cooling, that’s easily enough.
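
The arithmetic behind that figure is straightforward: PUE is total facility power divided by IT power, so a 90MW IT load at a PUE of 1.3 implies about 90 x 0.3 = 27MW of overhead. A minimal sketch in Python, assuming the 90MW refers to the IT load and that all of that overhead is attributable to cooling:

```python
def cooling_overhead_mw(it_load_mw: float, pue: float) -> float:
    """Non-IT overhead implied by a PUE figure (total power = IT power * PUE)."""
    return it_load_mw * (pue - 1.0)

# Figures quoted above: a 90 MW data centre at PUE 1.3, and a Seoul LNG terminal
# offering 350 MW of cooling and 87 MW of electricity.
overhead = cooling_overhead_mw(90.0, 1.3)
print(f"Cooling and other overhead needed: ~{overhead:.0f} MW")        # ~27 MW
print(f"Fits within the terminal's 350 MW of cooling: {overhead <= 350.0}")
```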

The main limitation of the idea is the variation in the amount of gas being vaporised: the data centre would only be able to rely on the steady output of the plant (the “minimum gas sendout”). However, as Shatten points out, gas is a transition fuel from coal, and is increasingly used in generators which power the base load of countries’ national grids.

European data centres are migrating northwards to some extent at the moment, to countries like those in the Nordic region, where cooling can be had for nothing from the surrounding air. Co-location with LNG plants could be particularly useful in hot countries in Southern Europe, like Spain and Portugal, where cooling is harder to come by.

“Once this happens, it will happen fast,” said Shatten. “The data centre has very little to lose by trying this – it can get its money back very quickly on the energy savings.”

Uptime Top Ranking

The other Green Enterprise IT awards went to actual data centres – but most featured liquid cooling. The University of Leeds was recognised for using British company Iceotope’s liquid cooling system for the servers in its high performance computing (HPC) system.

Interxion won a “retrofit” award for a system that uses sea water to cool multiple data centres in Stockholm, and then uses the warm water to heat local offices before returning it to the sea. The firm says it reduced its energy needs by 80 percent and got its PUE down to 1.09.

Other winners included the US National Center For Atmospheric Research, which achieved a PUE of 1.08 in a new high-performance computing (HPC) facility, while a design innovation award went to TD Bank for a data centre that includes rainwater harvesting and onsite generation.

 

Gas-Cooled Data Centre Idea Wins Green Prize.

Why Consider a Modular Data Center? #DataCenter #cre #ccim

April 10, 2013

via Why Consider a Modular Data Center? » Data Center Knowledge.

By: Bill Kleyman

This is the third article in the Data Center Knowledge Guide to Modular Data Centers series. The initial black eye for containers and the modular concept was mobility. The Sun Blackbox was seen on oil rigs, in war zones and in places where a data center is typically not found. To an industry of large brick-and-mortar facilities that went to all extremes to protect the IT within, the notion of this data center in a box being mobile was not only unattractive, but laughable as a viable solution. What it did do, however, was start a conversation around how the very idea of a data center could benefit from a new level of standardized components and from delivering IT in a modular fashion built around innovative ideas.

Faced with an economic downturn and credit crunches, businesses took to modular approaches as a way to get funding approved in smaller amounts and mitigate the implied risk of building a data center. The two biggest problems typically cited with data centers are capital and speed of deployment: the traditional brick-and-mortar data center takes a lot of money and time to build. The quick evolution of supporting technologies further entices organizations to work with fast and scalable modular designs. Beyond those two primary drivers, there are many other benefits and reasons why a modular data center approach is selected.

Design

• Speed of Deployment: Modular solutions have incredibly quick timeframes from order to deployment. As a standardized solution it is manufactured and able to be ordered, customized and delivered to the data center site in a matter of months (or less). Having a module manufactured also means that the site construction can progress in parallel, instead of a linear, dependent transition. Remember, this isn’t a container — rather a customizable solution capable of quickly being deployed within an environment.

• Scalability: With a repeatable, standardized design, it is easy to match demand and scale infrastructure quickly. The only limitations on scale for a modular data center are the supporting infrastructure at the data center site and available land. Another characteristic of scalability is the flexibility it grants by having modules that can be easily replaced when obsolete or if updated technology is needed. This means organizations only need to forecast technological changes a few months in advance, so a cloud data center solution doesn’t have to take years to plan out.

• Agility: Being able to quickly build a data center environment doesn’t only revolve around the ability to scale. Being agile with data center platforms means being able to quickly meet the needs of an evolving business. Whether that means providing a new service or reducing downtime — modular data centers are directly designed around business and infrastructure agility. Where some organizations build their modular environment for the purposes of capacity planning, other organizations leverage modular data centers for highly effective disaster recovery operations.

• Mobility and Placement: A modular data center can be delivered wherever it is desired by the end user. A container can claim ultimate mobility, as an ISO-approved method for international transportation. A modular solution is mobile in the sense that it can be transported in pieces and re-assembled quickly on-site. Mobility is an attractive feature for those looking at modular for disaster recovery, as it can be deployed to the recovery site and be up and running quickly. As data center providers look to take on new offerings, they will be tasked with staying as agile as possible. This may very well mean adding additional modular data centers to help support growing capacity needs.

• Density and PUE: Density in a traditional data center is typically 100 watts per square foot. In a modular solution the space is used very efficiently, with densities as high as 20 kilowatts per cabinet (the sketch after this list puts these two density figures side by side). The PUE can be determined at commissioning and, because the module is pre-engineered and standardized, PUEs can be as low as 1.1–1.4. The PUE metric has also become a great gauge of data center green efficiency. Look for a provider that strives to break the 1.25–1.3 barrier, or at least one that’s in the +/- 1.2 range.

• Efficiency: The fact that modules are engineered products means that internal subsystems are tightly integrated, which results in efficiency gains in power and cooling in the module. First generation and pure IT modules will most likely not have efficiency gains other than those enjoyed from a similar containment solution inside of a traditional data center. Having a modular power plant in close proximity to the IT servers saves money on costly distribution gear and reduces power loss thanks to the short distance. There are opportunities to use energy management platforms within modules as well, with all subsystems being engineered as a whole.

• Disaster Recovery: Part of the reason to design a modular data center is resiliency. A recent Market Insights Report conducted by Data Center Knowledge points to the fact that almost 50% of the surveyed organizations are looking at disaster recovery solutions as part of their purchasing plans over the next 12 months. This means creating a modular design makes sense. Quickly built and deployed, the modular data center can be built as a means for direct disaster recovery. For those organizations that have to maintain maximum uptime, a modular architecture may be the right solution.

• Commissioning: As an engineered, standardized solution, the data center module can be commissioned where it is built, with fewer steps to be performed once it is placed at the data center site.

• Real Estate: Modules allow operators to build out in increments of power instead of space. Many second generation modular products feature evaporative cooling, taking advantage of outside air. A radical shift in data center design takes away the true brick and mortar of a data center, placing modules in an outdoor park, connected by supporting infrastructure and protected only by a perimeter fence. Some modular solutions offer stacking also — putting twice the capacity in the same footprint.
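
To put the density figures from the list above side by side, here is a back-of-the-envelope sketch in Python. The roughly 30 sq ft of floor space allocated per cabinet (cabinet plus its share of aisles) is an assumption for illustration, not a figure from the guide:

```python
SQFT_PER_CABINET = 30.0          # assumed floor space per cabinet, incl. aisle share
TRADITIONAL_W_PER_SQFT = 100.0   # traditional density quoted above
MODULAR_KW_PER_CABINET = 20.0    # modular density quoted above

traditional_kw_per_cabinet = TRADITIONAL_W_PER_SQFT * SQFT_PER_CABINET / 1000.0
ratio = MODULAR_KW_PER_CABINET / traditional_kw_per_cabinet
print(f"Traditional: ~{traditional_kw_per_cabinet:.1f} kW per cabinet equivalent")
print(f"Modular:      {MODULAR_KW_PER_CABINET:.0f} kW per cabinet (~{ratio:.0f}x denser)")
```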

Operations

• Standardization: Seen as part of the industrialization of data centers, the modular solution is a standardized approach to building a data center, much like the approach Henry Ford took to building cars. Manufactured data center modules are constructed against a set model of components at a different location instead of the data center site. Standardized infrastructure within the modules enables standard operating procedures to be used universally. Since the module is prefabricated, the operational procedures are identical and can be packaged together with the modular solution to provide standardized documentation for subsystems within the module.

• DCIM (Data Center Infrastructure Management): Management of the module and components within is where a modular approach can take advantage of the engineering and integration that was built into the product. Many, if not all, of the modular products on the market will have DCIM or management software included that gives the operator visibility into every aspect of the IT equipment, infrastructure, environmental conditions and security of the module. The other important aspect is that distributed modular data centers will now also be easier to manage. With DCIM solutions now capable of spanning the cloud, data center administrators can have direct visibility into multiple modular data center environments. This also brings up the question of what’s next in data center management.

• Beyond DCIM – The Data Center Operating System (DCOS): As the modular data center market matures and new technologies are introduced, data center administrators will need a new way to truly manage their infrastructure. There will be a direct need to transform complex data center operations into simplified plug & play delivery models. This means lights-out automation, rapid infrastructure assembly, and even further simplified management. DCOS looks to remove the many challenges which face administrators when it comes to creating a road map and building around efficiencies. In working with a data center operating system, expect the following:
  – An integrated end-to-end automated solution to help control a distributed modular data center design.
  – Granular centralized management of a localized or distributed data center infrastructure.
  – Real-time, proactive environment monitoring, analysis and data center optimization.
  – DCOS can be delivered as a self-service automation solution or provided as a managed service.

Enterprise Alignment

• Rightsizing: Modular design ultimately enables an optimized delivery approach for matching IT needs. This ability to right-size infrastructure as IT needs grow enables enterprise alignment with IT and data center strategies. The module or container can also provide capacity when needed quickly for projects or temporary capacity adjustments. Why is this important? Resources are expensive. Modular data centers can help right size solutions so that resources are optimally utilized. Over or under provisioning of data center resources can be extremely pricey — and difficult to correct.

• Supply Chain: Many of the attributes of a modular approach speak to the implementation of a supply chain process at the data center level. As a means of optimizing deployment, the IT manager directs vendors and controls costs throughout the supply chain.

• Total Cost of Ownership (a simple capital-phasing illustration follows this list):
  – Acquisition: Underutilized infrastructure due to over-building a data center facility is eliminated by efficient use of modules, deployed as needed.
  – Installation: Weeks or months instead of more than 12 months.
  – Operations: Standardized components to support, and modules engineered for extreme efficiency.
  – Maintenance: Standardized components enable universal maintenance programs.
  Information technology complies with various internal and external standards. Why should the data center be any different? Modular data center deployment makes it possible to quickly deploy standardized modules that allow IT and facilities to finally be on the same page.
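
As a simple illustration of the acquisition point above, the sketch below compares paying for capacity as demand arrives (modular increments) with building the full facility on day one. Every cost figure and the demand ramp are invented placeholders, not vendor numbers:

```python
COST_PER_MW_UPFRONT_M = 9.0        # $M per MW for a single full build (assumption)
COST_PER_MW_MODULAR_M = 10.0       # $M per MW per module, slight premium (assumption)
DEMAND_MW_BY_YEAR = [2, 4, 6, 8]   # assumed IT load ramp over four years
FULL_BUILD_MW = 8

print(f"Traditional: ${FULL_BUILD_MW * COST_PER_MW_UPFRONT_M:.0f}M in year 1 "
      f"for {FULL_BUILD_MW} MW (much of it idle in the early years)")

deployed_mw = 0
for year, demand_mw in enumerate(DEMAND_MW_BY_YEAR, start=1):
    added_mw = demand_mw - deployed_mw
    deployed_mw = demand_mw
    print(f"Modular, year {year}: ${added_mw * COST_PER_MW_MODULAR_M:.0f}M "
          f"for {added_mw} MW added (capacity tracks demand)")
```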

Categories: DataCenter

To Cloud or Not to Cloud? | Cloud Computing Journal | #CRE #CCIM #SIOR #DATACENTER #CLOUD

April 2, 2013

Today’s IT infrastructure is in the midst of a major transformation. In many ways, the data center is a victim of its own success. The growing number of technologies and applications residing in the data center has spawned increasing complexity, which makes IT as a whole less responsive and agile. While businesses are focused on moving faster than ever, large and complex infrastructure is inherently rigid and inefficient.

As a result, IT is moving outside the traditional data center into colocation facilities and cloud infrastructures – essentially Infrastructure Anywhere. The move to Infrastructure Anywhere is driven by the core objective of improving responsiveness and agility and reducing costs. For example, you can scale up resources through the cloud in minutes, not months. But for all of its benefits, this new Infrastructure Anywhere model presents critical challenges.

To make smart decisions about where to run applications and what kind of resources you need, you first must understand your workload: utilization, capacity, and cost. Gaining unified visibility is difficult when your application workloads are distributed across data centers and colocation facilities in different parts of the country or around the world. With limited visibility, how do you accurately align resources and capacity with workloads for efficient processing and cost control – and, more important, how do you achieve the full business value of your IT investment?

It’s All About Agility
According to the results of Sentilla’s recent survey of data center professionals about their plans for cloud deployments, agility and flexibility are the top drivers behind enterprise IT transformation initiatives such as cloud deployments – followed closely by issues of capacity and cost.

Figure 1: Key drivers for cloud computing initiatives

While agility is the prime motivating factor, the importance of cost as a factor should not be ignored. According to the survey, the major resource limitation experienced by respondents – for all infrastructure initiatives – is budget.

Figure 2: Resource limitations

Note that several of the reported constraints (personnel, storage capacity) are related to the broader issue of budget. In this sense, cost is overwhelmingly the most important constraint on IT initiatives – including cloud initiatives.

2013 Is for Planning, 2014 for Deployment
Of the organizations surveyed, nearly 50 percent plan to be deploying cloud initiatives in 2014. Many are in the planning phases. In general, we can expect cloud computing deployments to increase by 70 percent in 12 months:

Figure 3: Data center cloud initiatives, by year

Similarly, those surveyed expect to gradually migrate more workloads to cloud platforms in the coming years – with 28 percent planning to run more than half of their applications in the cloud by 2014. The barriers to cloud migration are lowering.

Figure 4: Percentage of applications planned to move to the cloud, by year

The Cloud Isn’t a Homogeneous Place
Cloud computing can refer to several different deployment models. At a high level, cloud infrastructure alternatives are defined by how they are shared among different organizations.

Figure 5: Where respondents planned to deploy cloud initiatives

Private clouds offer the flexibility of elastic infrastructure shared between different applications, but are never shared with other organizations. Hosted on dedicated equipment either on-premises or at a colocation provider, a private cloud is the most secure but the least cost-effective cloud model.

Public cloud infrastructure offers similar elasticity and scalability and is shared with many organizations. This model is best suited for businesses that need to manage load spikes and scale to a large number of users without a large capital investment. Amazon Web Services (AWS) is perhaps the most widely deployed example of public cloud infrastructure as a service.

Hybrid cloud offers the dual advantages of secure applications and data hosting on a private cloud and the cost benefits of keeping sharable applications and data on the public cloud. This model is often used for cloud bursting – the migration of workloads between public and private hosting to handle load spikes.

Community cloud is an emerging category in which different organizations with similar needs use a shared cloud computing environment. This new model is taking hold in environments with common regulatory requirements, including healthcare, financial services and government.

The research showed that organizations are evaluating a broad range of different cloud solutions, including Amazon AWS, Microsoft Azure, Google Cloud Platform, and Red Hat Cloud Computing, as well as many solutions based on OpenStack, the open source cloud computing software.

Without planning, ad hoc cloud deployments combined with islands of virtualization will only add complexity to the existing data center infrastructure. The resulting environment is one of physical, virtual and cloud silos with fragmented visibility. While individual tools may deliver insight into specific parts of the infrastructure puzzle (physical infrastructure, server virtualization with VMware, specific infrastructure in a specific cloud provider), IT organizations have little visibility into the total picture. This lack of visibility can impede the IT organization’s ability to align infrastructure investments with business needs and cost constraints.

Infrastructure Complexity Is the New Normal
While it aims to bring agility to IT, the process of cloud transformation will only increase infrastructure complexity in the near term. IT organizations must manage a combination of legacy systems with islands of virtualization and cloud technologies.

When asked about where cloud infrastructure will reside, survey respondents indicated that they will be managing a blend of on-premises and outsourced infrastructure, with the balance shifting dramatically from 2013-2014.

Figure 6: Where cloud infrastructure will reside, by year

The Need for Unified Visibility into Complex Infrastructure
As you plan your own cloud initiatives, you must prepare for multiple phases of transformation:

  • Deploying new applications to the cloud as part of the broader application portfolio
  • Migrating existing applications to cloud infrastructure where possible and appropriate
  • Managing the hybrid “infrastructure anywhere” environment during the transition and beyond

To support these phases, you need visibility into workloads and capacity across essential application infrastructure – no matter where it resides. From the physical and virtual resources, up through applications and services, you will need insight so you can align IT with business objectives.

Figure 7: The need for infrastructure insight at all levels

Essential Infrastructure Metrics for Right-Sizing Infrastructure
Decisions about which applications to deploy to the cloud and where to deploy them will require visibility into:

  • Historical, current and predicted application workload
  • Current and predicted capacity requirements of the workload
  • Comparative cost of providing that capacity and infrastructure on different platforms

For application migration scenarios, you will need to understand the actual resource consumption of the existing application. Whether it’s a new application or a migrated one, you will need to ‘right-size’ the cloud infrastructure to avoid the twin dangers of over-provisioning (and wasting financial resources) and under-provisioning (and risking outages or performance slow-downs). You will need insight into:

  • Memory utilization
  • CPU utilization
  • Data transfer/bandwidth
  • Storage requirements

You will also need good metrics about the cost of running the application in your existing data center, as well as the predicted costs of running that same application on various platforms. These metrics need to factor in the total cost of the application, such as:

  • Personnel for supporting the application
  • Operating system
  • Management software
  • Cooling and power
  • Leased cloud
  • Server and storage hardware

To accurately predict the cost of running the application on cloud-based infrastructure, you will need accurate metrics around the actual, historical resource consumption of the application (storage, memory, CPU, etc.) as it maps to billable units by the provider. By understanding the actual consumption, you can avoid over-provisioning and overpaying for resources from external providers.

Infrastructure Analytics for Cloud Migration
For any given application that is a candidate for the cloud, you want to be able to compare the total cost of the resources required across different options (public, private and on-premise).

While you could try to manually crunch these numbers in a spreadsheet, the computations are not trivial. These are decisions that you will need to make repeatedly, for each application, and revisit when an infrastructure provider changes its cost model or fee structure. For that reason you’ll want a tool that lets you get an accurate and continuous view into current costs and model “what-if” scenarios for different deployments.
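
As a sketch of what such a what-if comparison might look like, the snippet below estimates the monthly cost of a single application on a public cloud option versus an on-premises option, given its measured average consumption. Every rate, amortization figure, and overhead share here is a hypothetical placeholder rather than any real provider's price list:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    avg_vcpus: float        # average vCPUs actually consumed (from monitoring history)
    avg_memory_gb: float    # average memory footprint
    storage_gb: float       # persistent storage
    egress_gb_month: float  # outbound data transfer per month

def public_cloud_monthly(w: Workload, hours: float = 730.0) -> float:
    """Hypothetical unit rates standing in for a provider's billable units."""
    return (w.avg_vcpus * 0.05 * hours         # $ per vCPU-hour
            + w.avg_memory_gb * 0.006 * hours  # $ per GB-hour of memory
            + w.storage_gb * 0.10              # $ per GB-month of storage
            + w.egress_gb_month * 0.09)        # $ per GB of egress

def on_prem_monthly(w: Workload) -> float:
    """Hypothetical fully loaded on-premises cost for the same workload."""
    amortized_hardware = 250.0                                   # server + storage share
    power_and_cooling = w.avg_vcpus * 4.0 + w.storage_gb * 0.02  # rough energy share
    people_and_software = 180.0                                  # per-workload share
    return amortized_hardware + power_and_cooling + people_and_software

app = Workload(avg_vcpus=8, avg_memory_gb=32, storage_gb=500, egress_gb_month=200)
print(f"Public cloud estimate: ${public_cloud_monthly(app):,.0f}/month")
print(f"On-premises estimate:  ${on_prem_monthly(app):,.0f}/month")
```

Re-running this kind of comparison whenever a provider changes its pricing, and once per candidate application, is exactly the repetitive work a purpose-built tool automates.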

Continuous Analysis for Continuous Improvement of the “Infrastructure Anywhere” Environment
Before deploying an application, what-if scenarios help you make sound resource decisions and right-size applications. After deploying, continuous analysis is key to ensuring that you are optimizing capacity and using resources most efficiently.

While individual tools may already give you slices of the necessary information, you need integrated insight into the complete infrastructure environment. Again, emerging infrastructure intelligence can assemble necessary information from applications and assets that are both on and off your premises, virtualized and not, in different platforms and locations.

Figure 8: Transformational analytics

The software can provide ‘single pane of glass’ visibility into assets and applications throughout the physical, virtual, and cloud infrastructure, including:

  • Application cost/utilization spanning different locations
  • True resource requirements of apps (for more accurate provisioning in cloud infrastructure)
  • CPU and memory utilization of apps, wherever they reside

Summary
By 2014, enterprise computing will look quite different than it does today, yet many legacy systems and infrastructure will still be with us. IT operations, business units and application architects will need to manage applications that reside in infrastructure that spans on-premise and offsite locations, with public, private, hybrid, and community cloud infrastructure. Data centers will be just one part of the total pool of infrastructure that IT manages on behalf of the business.

To manage this transformation, you will need to make smart decisions about where workloads should reside based on specific application and business needs. As these changes roll out, you will need to manage the transforming and hybrid application infrastructure to deliver the necessary performance and service levels, no matter where applications reside.

IT organizations need the insight to make fast, smart and informed decisions about where workloads and data should reside and how to deploy new applications. Rather than isolated silos of metrics and capacity and utilization data, IT needs unified visibility into infrastructure across the virtual computing environment – both on-premise and off. And they need the metrics and continuous analysis to manage the evolving infrastructure in a manner aligned with business objectives.

An emerging category of infrastructure intelligence can provide the continuous and unified analytics necessary to understand and compare your decisions and to manage the data center during the transformation. With broad ‘infrastructure insight’ you can align cloud platforms with business needs and cost requirements – delivering the agility to realize new revenue opportunities with the insight to contain the costs of existing applications.

To Cloud or Not to Cloud? | Cloud Computing Journal.

Data Center Site Selection Based on Economic Modeling | 2013-02-05 | Mission Critical Magazine #cre #ccim #sior #datacenter

February 28, 2013

Data Center Site Selection Based on Economic Modeling | 2013-02-05 | Mission Critical Magazine.

Data Center Site Selection Based on Economic Modeling

The path to a high-performance data center starts at the beginning

By Debra Vieira

As the demand for data center capacity grows around the world, so too grows the need among data center owners, developers, and operators to find better ways to improve the economic performance as well as the energy efficiency of their facilities. According to a “Green Data Centers” report by Pike Research,1 the green data center segment of the industry is expected to more than double in size in the next four years. Underscoring that trend are announcements in recent months of large green data center investments by such bellwether companies as Apple, Microsoft, and IBM. As the industry has grown, so too has the concern that the data center industry may face regulatory pressure related to energy consumption. Some utilities are already expressing concerns about having enough power in the future to feed the number of data centers expected in their regions. Owners are justifiably seeking to discover new and better ways to improve their sustainable performance with reliable analytical methods and tools.

METHODOLOGY

This article describes an economic modeling approach being used to help data center owners more wisely choose locations for new data center developments and to forecast costs along with long-term economic returns on investment associated with those prospective sites.

The data center site selection process has traditionally been influenced by such qualitative factors as personal preference, economic development incentives, or economic projections. The analytical approach described in this article avoids such pitfalls by applying a more comprehensive and quantitative means of evaluating potential data center sites.

The methodology applies a multi-variable approach that measures the impact of site location on a data center’s cost, expressed in net present value, and on its environmental performance. It calculates the impact site choices have on a project’s schedule and ability to be sustainable, evaluates options to incorporate on-site energy production, and factors in potential public and private incentives related to sites and other key decision-making drivers. The approach also evaluates such data center sustainability site factors as carbon usage effectiveness (CUE), water usage effectiveness (WUE), and power usage effectiveness (PUE). This approach has enabled reductions of up to 70% in some of these categories. The methodology is also designed to perform in multi-national site comparison scenarios.

Any advanced planning methodology of this kind must use validated data rather than hypothetical inclinations. Accordingly, the intent of models such as our “Opportunity Mapping” macro-analysis methodology is to replace subjective site selection decision-making with an objective data-driven process. The Opportunity Mapping methodology uses a geographic information system (GIS) approach of layering and weighing numerous site criteria with our Data Center Site Analysis Model, which applies a data-driven methodology to analyze prospective sites and forecast owner costs and economic returns on investment associated with specific prospective sites.
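
A toy sketch of the layer-and-weigh idea behind this kind of scoring is shown below: each candidate site receives a normalized 0–1 score per criterion, and a weighted sum ranks the sites. The criteria, weights, and scores are invented for illustration and are not the actual Opportunity Mapping model:

```python
# Hypothetical criteria and weights; higher scores are better (for hazards,
# a higher score means lower risk).
WEIGHTS = {
    "energy_cost": 0.30,
    "network_connectivity": 0.20,
    "natural_hazard_risk": 0.20,
    "incentives": 0.15,
    "workforce": 0.15,
}

SITES = {
    "Site A": {"energy_cost": 0.9, "network_connectivity": 0.6,
               "natural_hazard_risk": 0.9, "incentives": 0.5, "workforce": 0.4},
    "Site B": {"energy_cost": 0.5, "network_connectivity": 0.9,
               "natural_hazard_risk": 0.6, "incentives": 0.7, "workforce": 0.8},
}

def weighted_score(scores: dict) -> float:
    """Combine a site's per-criterion scores into a single weighted figure."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

for name, scores in sorted(SITES.items(), key=lambda item: -weighted_score(item[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```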

DATA-DRIVEN VALIDATION FOR DEVELOPMENT

Experience has shown that owners who rely on qualitative preference rather than quantitative site selection criteria run a greater risk of being disappointed in the end result. Craig Harrison, the developer of Colorado’s Niobrara Energy Park in Weld County, CO, has been a vocal proponent of using data-driven validation for green data center development decisions. He underwrote a comprehensive economic comparison of already established data center markets in 11 U.S. states. His analysis objectively compares a range of factors including economic development incentive packages, energy costs and taxes. The analytical model uses a 200,000-sq-ft data center entailing $240 million in construction along with $500 million in hardware at each compared location.

Data applied in these models include construction costs, operating labor costs, renewable energy resources, incentives and special enterprise zones, workforce availability and qualifications, taxes, utility rates, logistics and transportation, network connectivity, transportation infrastructure, environmental and regulatory restrictions, weather, geology, land and building costs, environmental factors, proximity to raw materials, and markets and special environmental considerations.

Externally derived data are used to develop site-specific conclusions related to such factors as climate, utilities, natural hazards, regulatory issues, and economic development incentives. Examples of specific sources for such data include local utilities, the National Renewable Energy Laboratory (NREL), U.S. Geological Survey (USGS), U.S. Nuclear Regulatory Commission, Federal Energy Regulatory Commission (FERC), National Oceanic and Atmospheric Administration (NOAA), Energy Information Administration, and various regional offices of economic development and trade.

Data-driven decision-making is imperative for owners seeking to reduce their facilities’ resource consumption and energy-related costs. The goal of this economic modeling methodology is to provide a clear choice of a site for data center development. An example of using this Opportunity Mapping economic method was the data-driven analysis done for Colorado’s Niobrara Energy Park (NEP).

NIOBRARA ENERGY PARK ANALYSIS

Natural disasters can be a significant threat to any data center. The NEP site evaluation included reviewing seismic activity based on data from the USGS National Seismic Hazard Map. To determine the frequency and potential impact to the site of severe wind, tornado, hail damage, and other historic weather events, data from NOAA’s Annual Severe Weather Report Summary were analyzed. Wildfire hazards as well as fuel sources (e.g., grasses) for wildfires were evaluated. Flood hazards based on 100- and 500-yr flood plains were reviewed and overlaid on the site to assist in master planning to develop locations of buildings and critical infrastructure.

Manmade disasters can also be a significant threat. Data were gathered and reviewed to understand NEP’s proximity to both active and inactive nuclear power plants and predicted fallout zones based on prevailing winds. Beyond disasters, the site was evaluated using topographic maps to determine suitable building locations and flexibility for accommodation of future development, as well as natural security features. NEP was determined to be located in a low seismic hazard area, with minimal impacts from severe weather and wildfires, and located outside of flood plains and manmade disaster areas.

The Opportunity Mapping methodology also evaluates site assets including electricity, network, natural gas, water, and wastewater. Site proximity to inexpensive electrical utility sources is vital for operations of any data center. It was determined that NEP was in close proximity to three large transmission-level electrical lines. The existing substation has available capacity, which can also be expanded. Additionally, opportunity exists to build a new large capacity substation that connects to the transmission lines. Locations of primary and secondary fiber networks were defined. Networks were evaluated for capacity, redundancy, and ease of network connectivity.

This evaluation determined that NEP’s location near one of the largest gas hubs in the nation provided access to some of the lowest spot gas prices in the country. Given the proximity to the hub and the resulting low fuel prices, the site lends itself to the construction of a natural gas-fired power plant to produce electricity. Water sources were evaluated for flow and quality to determine their potential for data center cooling. Wastewater discharge options were considered based on anticipated volumes and profiles for various data center cooling options. These discharge options were then reviewed against local discharge permit requirements to determine allowable disposal methods.

MODELING TOWARD EFFICIENCY

Data centers are significant consumers of energy. Efforts to reduce energy consumption by utilizing free-cooling opportunities based on local climate conditions can provide significant operational savings. A site’s climate is evaluated using historic psychrometric data and comparing the results to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Class 1 recommended or allowable data center operational criteria, to determine potential energy savings through use of economizer cooling. As an illustration of this strategy, the NEP site was determined suitable for a cost-effective method of using external air to assist in cooling the data center.

An even greater savings can be realized by using renewable energy sources to offset consumption from the local utility. Weather and wind data were evaluated for the NEP site. Analysis indicated that the potential for reliable wind-electric generation was substantial and that the NEP location was ideal for this renewable resource. Solar energy opportunities were studied to determine the potential capacities of photovoltaic farms as well as their optimal locations on the property. Application of fuel cell technology is also a possibility at the NEP site due to the nearby natural gas hub and inexpensive fuel. Storage of renewable energy is a challenge; therefore, sites are also evaluated for large-scale energy storage opportunities such as compressed air and thermal storage.

Due to the significant availability of renewable resources, the NEP site is a prime location for using microgrid technologies. This technology forecasts and manages energy generation and consumption. Capacities from renewable resources, natural gas, and traditional electric utilities are forecast against the critical load. A dispatch planning program determines the most cost-effective mix of power generation constrained around maximizing renewable resources and a schedule is created for the following day. The dispatch program then executes the schedule on the following day, adjusting in real time to compensate for deviations from forecasted load or for unplanned events.
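
A highly simplified sketch of that day-ahead dispatch idea follows: take forecast renewable output first, then fill the remaining load from dispatchable sources in merit (cost) order. All capacities, costs, and forecast shapes below are illustrative assumptions, not NEP figures:

```python
# Illustrative day-ahead dispatch: renewables first, then cheapest dispatchable sources.
HOURS = range(24)
load_mw  = [30 + 10 * (8 <= h <= 18) for h in HOURS]           # assumed load profile
wind_mw  = [12 if (h < 6 or h > 19) else 6 for h in HOURS]     # assumed wind forecast
solar_mw = [max(0, 15 - abs(h - 13) * 3) for h in HOURS]       # assumed solar forecast

# Dispatchable sources in merit order: (name, capacity MW, cost $/MWh) - assumptions.
DISPATCHABLE = [("gas_turbine", 25, 45.0), ("grid_import", 50, 70.0)]

day_ahead_schedule = []
for h in HOURS:
    remaining = load_mw[h] - wind_mw[h] - solar_mw[h]   # load left after renewables
    plan = {"hour": h, "wind": wind_mw[h], "solar": solar_mw[h]}
    for name, capacity, _cost in DISPATCHABLE:          # cheapest source first
        take = min(max(remaining, 0), capacity)
        plan[name] = take
        remaining -= take
    day_ahead_schedule.append(plan)

print(day_ahead_schedule[12])   # the plan for hour 12 of the following day
```

In practice the real-time layer then corrects this plan as actual load and renewable output deviate from the forecast, as described above.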

MAPPING BUSINESS INCENTIVES AND REGULATORY REQUIREMENTS

The Opportunity Mapping methodology researches business incentives from utility providers; grants from county, state and federal agencies; exemptions from sales and use taxes; and property tax abatement. Additionally, financial assistance programs, low-cost financing, development bonds, and tax credits are reviewed for applicability. All of these programs help define a community’s business environment and its willingness to accept and support new business from data center opportunities.

Some site assets that are more obscure than the ones identified above are proximity to a skilled workforce and to transportation, including highways, airports, and railroads. The methodology related to these determining factors identifies nearby concentrations of workers in cities and towns as well as the availability of education for a regional workforce. Universities, colleges, and other high-technology companies are located and evaluated against the requirements of the data center. Highway access to the site as well as other regional travel infrastructure (e.g., rail and air) are considered to indicate ease of site access, and site proximity to local communities is considered as a criterion related to a workforce’s willingness to be employed at the facility.

Site analysis is not complete until regulatory and permitting requirements have been analyzed. Storm water and wastewater discharge as well as air emissions must be in compliance with local, state, and federal regulations. Site locations are compared to air pollution attainment and nonattainment maps to determine the impact of stringent and lengthy air permitting processes. This method reviews the requirements, financial impacts of compliance, and potential schedule impacts.

Upon the validation of site assets and associated capacities, and with a full understanding of financial impacts and regulatory requirements, master planning and data center concepts can be developed that utilize assets to the fullest extent. These concepts can be further refined to define project costs, data center IT load, projected cooling requirements, and associated PUE, WUE, and CUE metrics, so that solid, data-driven comparisons and decisions can be made for site selection.

LEFDAL MINE DATA CENTER ANALYSIS

Another example of Opportunity Mapping was the data-driven analysis done for the Lefdal Mine Data Center located in Måløy, Norway. This project went through the same extensive evaluation process as NEP. The Lefdal mine is an inactive olivine mine. Olivine is a magnesium iron silicate mineral, which is typically olive-green in color and, when found in gem quality, is known as peridot. Industrial-quality olivine is used as an industrial abrasive or as refractory sand (it resists high temperatures). The olivine was extracted using a room-and-pillar technique, which leaves pillars of untouched material to support the roof while the extracted areas become the rooms.

The results of the evaluation process indicated that the Lefdal mine is an excellent location for a data center complex. There is no underground gas from the ore and the mine offers natural protection from electromagnetic pulse (EMP) waves, making these large rooms an ideal location to house high-value data centers as well as supporting mechanical and electrical infrastructure. Other site assets include a large, cold fjord that can be used for cooling. The fjord is ice-free and fed by four different glaciers, thereby maintaining a constant temperature. Conceptual cooling utilizes a seawater circulation system with a heat exchanger and another water circulation system within the complex. Norway’s national fiber backbone is located near the mine and is in close proximity to wind and hydroelectric renewable power sources. Wind power from nearby wind farms has already proven itself a viable source and an early projection of hydroelectric power capacity is in the range of 6 terawatts. The Lefdal site is located near three major airports and is near a large harbor and shipping port. Road access conforms to European standards for deliveries of construction and production materials.

Understanding the Lefdal site’s assets and capacities allowed for master planning and data center concepts to be developed, which utilized the cold fjord and significant adjacent renewable energy resources. These concepts helped to define project costs, critical IT load, and cooling requirements along with other data center metrics allowing for objective site selection.

SMALLER-SCALE ASSESSMENTS

While this Opportunity Mapping methodology is excellent for large data center developments such as NEP and Lefdal, the same methodology can be used for smaller assessments, for example, evaluation of the potential for combined heat and power (CHP), in which both power and heat are produced from a single fuel source for data center power and cooling. The heat recovered from the production of power can be utilized in an absorption chiller to produce chilled water for data center cooling. Technologies used for power production can include reciprocating engines, gas turbines, microturbines, and fuel cells. By developing financial models that incorporate location and climate, utility rates, local incentives, and projected electrical demand, impacts on capital and operating expenses can be quantified. This allows for technology comparisons as well as site comparisons using objective financial and performance metrics.
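
A rough sketch of such a CHP evaluation is given below, assuming an on-site gas engine whose recovered heat drives an absorption chiller that offsets part of the electric chiller load. Every price, efficiency, and load figure is an assumption, and capital and maintenance costs are ignored:

```python
IT_LOAD_KW = 1000.0            # assumed constant critical load
GRID_PRICE = 0.10              # $/kWh electricity (assumption)
GAS_PRICE = 0.035              # $/kWh of gas, thermal basis (assumption)
ENGINE_ELEC_EFF = 0.40         # electrical efficiency of the CHP engine (assumption)
HEAT_RECOVERY_FRACTION = 0.40  # share of fuel energy recovered as useful heat
ABSORPTION_COP = 0.7           # cooling delivered per unit of heat input
ELECTRIC_CHILLER_COP = 4.0     # conventional chiller used as the baseline
HOURS = 8760

fuel_kwh = IT_LOAD_KW / ENGINE_ELEC_EFF * HOURS       # gas burned to carry the IT load
cooling_from_heat_kwh = fuel_kwh * HEAT_RECOVERY_FRACTION * ABSORPTION_COP
avoided_chiller_kwh = cooling_from_heat_kwh / ELECTRIC_CHILLER_COP   # chiller kWh offset

baseline_cost = (IT_LOAD_KW * HOURS + avoided_chiller_kwh) * GRID_PRICE
chp_cost = fuel_kwh * GAS_PRICE
print(f"Grid power + electric chiller: ${baseline_cost:,.0f}/yr")
print(f"CHP + absorption cooling:      ${chp_cost:,.0f}/yr")
print(f"Indicative operating saving:   ${baseline_cost - chp_cost:,.0f}/yr")
```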

By considering the economic modeling methodology presented, data center owners can compare technologies, evaluate locations for site selections, evaluate environmental performance, estimate financial impacts and assistance programs, and wisely forecast long-term investments. Objective, quantifiable, and reliable net present value costs of a data center can be developed, thereby avoiding the pitfalls of qualitative evaluation.


WORKS CITED

1.“Green Data Centers IT Equipment, Power and Cooling Infrastructure, and Monitoring and Management Tools: Global Industry Drivers, Market Analysis and Forecasts.” Pike Research. April 23, 2012.

Categories: DataCenter

Data Centre Risk Index | #datacenter #cre #ccim #sior #realesate

February 20, 2013

Data Centre Risk Index launched

via Building Services Design – M&E Engineering Consultants.

International consultancies Cushman & Wakefield and hurleypalmerflatt have issued a new study evaluating the risks to global data centre facilities and international investment in business critical IT infrastructure. The firms have carried out a joint study evaluating risk in 20 leading and emerging markets and across key regional centres. The study identifies a weakness in industry decision-making processes and argues that companies should be evaluating their risk across a greater number of criteria and locations and mitigating or managing certain risks before investing in data centres. According to the Data Centre Risk Index, published today by Cushman & Wakefield and hurleypalmerflatt, the demand for data storage capacity, accelerated by recent technological advances, means that more and more companies are investing in data centres overseas and potentially increasing their exposure to risk.

Data centres support business critical IT systems and any downtime can cost millions in lost revenue and even threaten the viability of a business. The Index highlights the impact on business continuity resulting from extreme acts of nature – as witnessed in Japan, New Zealand, Iceland, the USA and Australia – and political instability following the unrest in North Africa and the Middle East.
The Index ranks countries according to the risks likely to affect the successful operation of a data centre and also identifies factors such as high energy costs, poor international bandwidth and protectionist legislation as major risks.
The U.S. ranks first in the index, with the lowest risk for locating a data centre, reflecting the low cost of energy and its favourable business environment. It is followed by Canada in second position and Germany in third.

Data Centre Risk Index 2011

Rank            Index Score              Country
1                      100                   United States
2                       91                    Canada
3                       86                    Germany
4                       85                    Hong Kong
5                       82                    United Kingdom
6                       81                    Sweden
7                       80                    Qatar
8                       78                    South Africa
9                       76                    France
10                     73                    Australia
11                     71                    Singapore
12                     70                    Brazil
13                     67                    Netherlands
14                     64                    Spain
15                     62                    Russia
16                     61                    Poland
17                     60                    Ireland
18                     56                    China
19                     54                    Japan
20                     51                    India

Stephen Whatling, Global Services Director at hurleypalmerflatt, said: “Despite their status as engines of global growth, China and India score poorly as a result of strict foreign ownership regulations and other barriers to investment.
“Brazil is a key emerging market, currently enjoying substantial growth and attention from foreign investors.  With improvements in international bandwidth and infrastructure and tax reforms for non-domiciled companies, Brazil could emerge as a Latin American technology powerhouse.”
Keith Inglis, Partner at Cushman & Wakefield, said: “Sweden, Qatar and South Africa are untapped markets and attractive locations, although requiring further investment in infrastructure.
“Meanwhile high corporation tax, energy and labour costs in the United Kingdom mean there is a risk that owners and operators could begin to look overseas to reduce overheads.”

Categories: DataCenter

What Type of Data Center Do You Need? |#cre #datacenter #ccim #sior

February 5, 2013

What Type of Data Center Do You Need? by Compass Datacenters.

“Modular—Composed of standardized units or sections for easy construction or flexible arrangement” —Random House American Dictionary

Not every customer has the same data center requirements; and not every “modular” offering meets the definitional standard, has the same capabilities or can provide a standalone solution. Determining the right type of data center for your business is a function of the specific problems that you need to address and the ability of your chosen data center offering to solve all of them. At the present time, data centers fall into the following four categories.

via What Type of Data Center Do You Need? | Compass Datacenters.

Data Center Types

Standalone Data Centers

Compass’ standalone data centers use our Truly Modular Architecture that embraces the strengths of the four competing modular data center offerings and incorporates solutions for their weaknesses into its design. As a result, a Compass data center simplifies capacity planning and puts the control of the dedicated facility into your hands.

The four (4) Truly Modular building blocks combine to enable you to locate your Uptime Institute Tier III certified, LEED Gold data center where you need it and to grow your site in 1.2MW increments, on your schedule, eliminating the need to pay for unused capacity. Everything is included: your own office space, loading dock, storage and staging areas, break room, and security area. In other words, it’s just like buying a shrink-wrapped data center off the shelf at the “IT store”.
Best Suited For:

  • Security conscious users
  • Users who do not like to share any mission critical components
  • Geographically diverse locations
  • Applications with 1-4MW of load and growing over time
  • Primary and DR data centers
  • Service provider data centers
  • Heterogeneous rack and load group requirements

Weaknesses:

  • Initial IT load over 4MW
  • Non-mission critical datacenter applications

Traditional

Traditional offerings are building-based solutions that use shared internal and external backplanes and plant (for example, chilled water plant and parallel generator plant). Traditional data centers are either built all at once or, as more recent builds have been done, are expanded by adding new data halls within the building. The challenge with shared backplanes is that they introduce the risk of an entire system shutdown due to cascading failures across the backplane (for an example of a large facility outage, see: Outages). For “phased” builds, the key drawback to this newer approach is the use of a shared backplane: future “phases” cannot be commissioned to Level 5/Integrated Systems Test (IST) since other parts of the datacenter are already live.
Best Suited For:

  • Single Users
  • Large IT loads, 5MW+ day one load

Weaknesses:

  • Large upfront capital requirement
  • Cascading failure potential on shared backplanes
  • Cannot be Level 5 commissioned
  • Geographically tethered
  • Shared common areas with multiple companies or divisions
  • Very large facilities that are not optimized for Moves/Adds/Changes

Monolithic Modular (Data Halls)

As the name would imply, Monolithic Modular data centers are large building based solutions. Like Traditional facilities they are usually found in large buildings and provide 5MW+ of IT power day one with the average site featuring 5MW-20MW of capacity. Monolithic Modular facilities use segmentable backplanes to support their data halls so they do not expose customers to single points of failure, and each data hall can be independently Level 5 commissioned prior to customer occupancy. Often the only shared component of the mechanical and electrical plant is the medium voltage utility gear. Because these solutions are housed within large buildings, the customer may sacrifice a large degree of facility control and capacity planning flexibility if the site houses multiple customers. Additionally, security and common areas (offices, storage, staging and the loading dock) are shared with the other occupants within the building. The capacity planning limit is a particularly important consideration as customers must pre-lease (and pay for) shell space within the facility to ensure that it is available when they choose to expand.
Best Suited For:

  • Users with known, fixed IT capacity plans. For example, 4MW day one, growing to 7MW by year four, with fixed takedowns of 1MW per year.
  • Users requiring limited Move, Adds and Changes
  • Users that don’t mind sharing common areas
  • Users that don’t mind outsourcing security

Weaknesses:

  • Must pay for unused expansion space
  • Geographically tethered, large buildings often require large upfront investment
  • Outsourced security
  • Shared common areas with multiple companies or divisions (the environment is not dedicated to a single customer)
  • Very large facilities that are not optimized for Moves/Adds/Changes

Monolithic Modular (Pre-Fabricated)

These building-based solutions are similar to their data hall counterparts with the exception that they are populated with the provider’s pre-fabricated data halls. The pre-fabricated data hall necessitates having tight control over the applications of the user. Each application set should drive the limited rack space to its designed load limit to avoid stranding IT capacity. For example, low load level groups go in one type of pre-fabricated data hall and high density load groups go into another. These sites can use shared or segmented backplane architectures to eliminate single points of failure and to enable each unit to be Level 5 commissioned. Like other monolithic solutions, these repositories for containerized data halls require customers to pre-lease and pay for space in the building to ensure that it is available when needed to support their expanded requirements.
Best Suited For:

  • Sets of applications in homogeneous load groups
  • Applications that work in a few hundred kW
  • Batch and super computing applications
  • Users with limited Move, Add and Change requirements
  • Users that don’t mind sharing common areas

Weaknesses:

  • Outsourced security
  • Expansion space must be pre-leased
  • Shared common areas with multiple companies or divisions (the environment is not dedicated to a single customer)
  • Since it still requires a large building upfront, may be geographically tethered
  • Very large facilities that are not optimized for Moves/Adds/Changes

Containerized

Commonly referred to as “containers”, pre-fabricated data halls are standardized units contained in ISO shipping containers that can be delivered to a site to fill an immediate need. Although advertised as quick to deliver, customers are often required to provide the elements of the shared outside plant including generators, switchgear and, sometimes, chilled water. These backplane elements, if not in place, can take upwards of 8 months to implement, often negating the benefit of speed of implementation. As long-term solutions, pre-fabricated containers may be hindered by their non-hardened designs, which make them susceptible to environmental factors like wind, rust and water penetration, and by their space constraints, which limit the amount of IT gear that can be installed inside them. Additionally, they do not include support space like a loading dock, a storage/staging area, or security stations, thereby making the customer responsible for their provision.
Best Suited For:

  • Temporary data center requirements
  • Applications that work in a few hundred kW load groups
  • Batch processing or supercomputing applications
  • Remote, harsh locations (such as military locales)
  • Limited Move/Add/Change requirements
  • Homogeneous rack requirements

Weaknesses:

  • Lack of security
  • Non-hardened design
  • Limited space
  • Cascading failure potential
  • Cannot be Level 5 commissioned when expanded
  • Cannot support heterogeneous rack requirements
  • No support space

Categories: DataCenter

Cloud users bring data back in house | #datacenter #cre #ccim #sior

February 4, 2013

Cloud users bring data back in house

By Penny Jones – DatacenterDynamics

 

Early adopters tighten the boundaries of cloud, according to new research and to cloud providers also working in the colo space.
Companies that shifted towards the Cloud before 2012 are starting to move towards tighter control of their data again, be it through the establishment of private clouds or by bringing IT back in house, according to Oracle.

Oracle, which predicted the move in its January 2012 Next Generation Data Center Index results, this month said the 2013 results collected by analyst firm Quocirca confirmed the trend, with 66% of companies – up from 45% – now using only in-house data facilities.

Quocirca analyst Clive Longbottom said companies are still consolidating, with the number of companies with a single in-house data center rising from 26% to 41% between the cycles, but at the same time the number of respondents using multiple in-house operations also rose by 6% to 25%.

“The latest two cycles of the research have caught organisations between two points,” Longbottom said.

“Cycle II was carried out just as organisations were looking to cloud computing, with many pilot projects being carried out.

“Cycle III has been carried out at a time when cloud is becoming more mainstream, and many pilots are now becoming full run-time projects.”

Colocation and cloud providers Telehouse and Navisite said they have both witnessed similar trends, but can’t attribute the shift solely to the maturing of the Cloud.

Lukasz Olszewski, senior systems architect at Telehouse, and David Grimes, CTO at Navisite, both say they believe companies “jumped” into the cloud without carefully thinking through their cloud strategy in the first place.

“The first wave of cloud adoption was not that well thought through,” Olszewski said.

“It was more about moving to the Cloud without choosing the cloud provider, and a lot of companies ended up using SaaS (Software-as-a-Service) services based in locations all over the world, only to discover, for example, that they had huge latency problems and other issues. Now they are moving back.”

Grimes said he believes a lot of companies thought it was necessary to move to the Cloud “because of the market buzz”.

“Those who have jumped back are generally those that didn’t have a good reason to move to the Cloud in the first place,” Grimes said. “They were almost on the bandwagon in that sense.”

“The ones that are staying are the companies that took a more methodical and gradual approach to adopting cloud – some may have been using existing managed services first and didn’t find cloud all that different, except it was faster. Not necessarily cheaper but more flexible.”

“I do think, however, those that have bounced back will start, in one or two years, re-approaching the Cloud and probably in a hybrid capacity because few companies really invest in enterprise data centers properly to build out what they need to operate their business.”

Oracle engineered systems product leader John Abel has another take on the bounce effect. He said he believes it is partly due to the simplification of the IT stack and the shift towards more standardised enterprise IT, which is allowing more agility around the Cloud.

“The simplification of these strategies by vendors have provided more flexibility and agility – customers now want to blend in private and public clouds,” Abel said.

“Companies want to be able to have a private cloud but have the freedom to move to a new hosting option that may be more efficient in the next six months.

“Many customers I speak with now move to a platform of choice, they understand when to use private, public or hybrid clouds.”

Either way, Abel, Grimes and Olszewski say the bounce is no indication that cloud will falter in future.

“We still see customers are asking for cloud more and more,” Olszewski said.

“And customers aren’t looking at cloud as a commodity any more – they are looking beyond using cloud as a test and development platform, looking at it instead from the whole stack.”

Luigi Freguia, SVP of Oracle Systems EMEA, said the “data deluge” will also push companies that may not have considered cloud before to start looking at external options.

Cloud users bring data back in house | Datacenter Dynamics.

Categories: DataCenter