Archive for December, 2012

Why the green building industry needs to pay attention to tenants | #cre #ccim #sior

December 30, 2012 Leave a comment


Why the green building industry needs to pay attention to tenants


The most important year in a building’s life is the first year of occupancy. This is when the construction team has pulled their trailers, completed their punch lists, claimed victory and moved on to the next project. This is also when the operations team starts to experience the gap between design intent and actual performance.

Experience suggests it takes years to “tune” a building’s systems and operating procedures to meet theoretical performance expectations, if it ever happens. Instrumentation and continuous monitoring capabilities are critical to giving the operations team the detailed system-level information necessary to identify installation problems and design limitations, and to optimize building performance.

Most of the attention in the green building industry, and all of the controversy, is focused on building design and new construction. New construction was an obvious area for the emerging green building industry to take shape a decade ago, driven by the desire of building owners, architects, engineers and contractors to work together to design and construct more sustainable buildings.

Ten years later, the industry needs to shift its attention to demonstrating performance, not just predicting it — greening tenant spaces, not just base buildings — and improving existing buildings, not just new ones. This suggests taking a life cycle approach to designing, delivering, operating and improving high-performance buildings and tenant spaces.

Delivering high performance tenant spaces

More than half the energy use in commercial buildings is in tenant spaces. Integrated design has been one of the most valuable changes driven by the green building industry. Most would agree that the cost-effective delivery of a high-performance building is highly dependent on the successful collaboration and coordination of the construction team, design professionals and system suppliers.

This process, which works so well in core building projects, should be applied to tenant build-outs as well. There are many opportunities to improve energy and water efficiency in tenant spaces by 20 to 40 percent through integrated design, with paybacks shorter than the lease term. In a recent blog post, I described a demonstration project that will provide integrated design guidelines and quantify the benefits of high-performance tenant build-outs.
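To make the payback claim concrete, here is a minimal sketch of the arithmetic behind a build-out efficiency decision. All the figures are hypothetical, not drawn from the demonstration project:

```python
def pays_back_within_lease(upfront_cost, annual_utility_bill,
                           efficiency_gain, lease_years):
    """True if savings from an efficiency build-out recover its cost
    before the lease expires (simple payback, no discounting)."""
    annual_savings = annual_utility_bill * efficiency_gain
    return upfront_cost / annual_savings <= lease_years

# Hypothetical: $60k build-out premium, $100k/yr utilities,
# 30% efficiency gain, 5-year lease -> pays back in 2 years
print(pays_back_within_lease(60000, 100000, 0.30, 5))  # True
```

A 20 percent gain on the same numbers would still pay back in three years, comfortably inside a typical five-year lease.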

Operating for high performance

After buildings have been successfully commissioned and verified to be achieving their designed performance, they need to be operated in a manner that maintains that performance over time. Building re-commissioning is the practice of retuning a building every 5 to 7 years to bring its performance back in line with its design intent and technical capability. This is a labor-intensive process that can be enhanced, or potentially eliminated, through the use of information technology and remote monitoring.

A well-instrumented building, and other buildings with interval utility meters, can be monitored using advanced analytical software to track energy performance and remotely detect faults in systems and equipment. This early warning system allows building operators to continue to perform their daily activities knowing that someone is watching their back and alerting them to conditions that waste energy, compromise occupant comfort or threaten equipment reliability.
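As an illustration of the kind of fault detection such software performs on interval meter data, here is a minimal sketch. It assumes a simple rolling-baseline rule rather than the analytics of any particular commercial product:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=8, threshold=3.0):
    """Flag interval-meter readings that deviate sharply from recent history.

    readings: kWh values at fixed intervals.
    Returns indices whose value exceeds the mean of the preceding
    `window` readings by more than `threshold` standard deviations.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and readings[i] > mu + threshold * sigma:
            alerts.append(i)
    return alerts

# Steady overnight load with one sudden spike at index 8
meter = [50, 51, 49, 50, 52, 50, 49, 51, 95, 50]
print(flag_anomalies(meter))  # [8] — the 95 kWh interval stands out
```

Real fault-detection tools add weather normalization, occupancy schedules and equipment-level rules, but the principle is the same: compare each interval against an expected baseline and alert on the outliers.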

Improving performance

Perhaps the most important trend in building performance is the increasing focus on systematic approaches to continuous improvement. Industry initiatives, such as the Better Buildings Challenge, and international standards such as ISO-50001 Energy Management Systems are putting renewed emphasis on improving the efficiency of existing buildings, particularly across portfolios of buildings.

A systematic approach to improving building performance involves setting goals, establishing policies, defining metrics, developing action plans, tracking performance and reporting progress. As we have learned in recent projects, the timing of improvement actions — from retro-commissioning to opportunistic equipment upgrades to major retrofits — is critical to achieving significant efficiency improvements with the best financial returns.

Taking the long view, and systematically planning and delivering performance across the entire facility life cycle, is critical to meeting sustainability objectives and helping buildings achieve their full economic and environmental potential.

Photograph of eco-friendly Kungsbron hotel in Stockholm provided by Nadezhda1906 via


via Why the green building industry needs to pay attention to tenants |

Categories: Sustainability

33 Harvard Business Review Blog Posts You Should Read Before 2013 #cre

December 30, 2012 Leave a comment

33 HBR Blog Posts You Should Read Before 2013 – Katherine Bell – Our Editors – Harvard Business Review.


HBR’s editors have compiled a list of some of our — and your — favorite of the nearly 2000 blog posts we published in 2012.

As usual, the topics that most preoccupied our authors and readers reflected our shared anxieties: the pressures exerted on our businesses by technology and the global economy — no end to economic uncertainty, the need to make sense of vast amounts of data, the problem and opportunity of disruptive innovation; as well as perennial personal worries — finding work that matters, never getting enough done.

We hope you’ll find some insights here you may have missed the first time around, and that they’ll help you make 2013 a productive and innovative year for your company and yourself. ( click link for full story )

Categories: News

Baby, You’re a Rich Man – #cre

December 30, 2012 Leave a comment



Baby, You’re a Rich Man


You don’t have to be Warren Buffett to be considered wealthy. It all depends on who is setting the bar. Here’s why it matters for your taxes, investment options and college

Categories: News

The Worst CEOs of 2012 – Businessweek #cre

December 29, 2012 Leave a comment
Categories: News

Labor Market, Housing in U.S. Strengthen Into 2013: Bloomberg #cre #realestate

December 29, 2012 Leave a comment

Will we still love the data centre seven years from now? | #cre #ccim #sior #datacenter

December 28, 2012 Leave a comment

Will we still love the data centre seven years from now?

By Tony Lock

It seems there has never been a clearer understanding of how rapidly business is changing and IT technologies are evolving.

With this in mind, we recently ran an online survey to ask readers of The Register how they thought data centres would develop between now and 2020. This is long enough for significant things to happen, but not so long as to take us into the realm of science fiction.

We gathered responses from both mainstream enterprises and representatives of IT suppliers. As supplier views were so different (generally a lot more optimistic), we have set them aside and will come back to them in a later article.

Even the enterprise and SME views are likely to reflect a positive view of the world, as it is those with enthusiasm for the topic who are more likely to have responded.

With that health warning in mind, let’s take a look at the results of the survey.

When looking to the future, we asked Reg readers: “Putting all of the constraints and the current state of the industry to one side for a minute, how desirable would the following be as part of your perfect IT vision?”

The responses are analyzed below.

Will we still love the data centre seven years from now? • The Register.

Cloudy or fine?

Cloud solutions, be they private, public or hybrid, have received considerable coverage over the past few years. Some pundits and advisors have even talked about a wholesale move to cloud computing of one form or another, and the consequent disappearance of traditional systems. Most respondents don’t accept that a huge shift is on the cards, however, particularly in relation to public cloud (figure 1).

Figure 1

Although we haven’t shown it explicitly here, behind this chart we see that it is the smallest organisations, those with fewer than 10 staff, that are the least attracted to a world dominated by public and private cloud, perhaps assuming that such offerings are not designed to meet their needs. This is ironic, given how often we hear about the potential benefits of cloud solutions for smaller companies.

When you break the cloud idea down into specific capabilities, however, many more see the relevance. Let’s look at this in more detail.

Do the automation

Reg readers’ perfect vision of operations and management reveals a large degree of uniformity in responses across a range of areas (figure 2).

Figure 2

A clear majority of respondents see a lot of value in solutions capable of simplifying and automating key administrative processes. Perhaps the most surprising element on the chart is that about one in six still perceive such capabilities as being not interesting or relevant at all.

There is a particular lack of interest in hybrid cloud management and sourcing. The relatively limited interest in public cloud (at least within this sample) partially explains this finding, but there is also the fact that some respondents see a clear demarcation between hosted and on-premise environments. This group may take advantage of different cloud services but they want to keep them separate.

The next chart (figure 3) shows that the concept of different IT specialists working together as fully integrated teams is seen as desirable by a substantial majority. Acceptance patterns are similar across organisations of all sizes.

Figure 3

The fact that three in five respondents see IT activities and investments revolving around the concept of business services rather than systems may also mark an important milestone in the views of the IT professionals who make up the vast majority of our survey sample.

The widespread acceptance reported here is unusual and it will be interesting to see how quickly these ideas transfer into adoption.

Getting down to business

It is also notable that almost two-thirds of respondents would value the ability to report IT costs accurately based on activity or resource consumption. So far, chargeback has rarely been used in organisations except in very simplistic models.
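A minimal sketch of what consumption-based chargeback amounts to, using hypothetical department names and figures rather than any model from the survey:

```python
def chargeback(total_cost, usage):
    """Split a shared IT cost pool across departments in proportion
    to their metered resource consumption (e.g. CPU-hours, GB stored)."""
    consumed = sum(usage.values())
    return {dept: round(total_cost * used / consumed, 2)
            for dept, used in usage.items()}

# Hypothetical monthly figures: a 12,000 pool split by CPU-hours
bill = chargeback(12000, {"Finance": 400, "Sales": 250, "R&D": 550})
print(bill)  # {'Finance': 4000.0, 'Sales': 2500.0, 'R&D': 5500.0}
```

The simplistic models the survey alludes to often stop at flat per-head allocation; activity-based reporting like this is what gives business units a reason to trust, and question, their IT bill.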

Taken together these results are in line with the ideals that Freeform Dynamics has been discussing as important to the future of IT service delivery.

Getting IT teams working effectively to supply business users with the applications, data and services they require is central to the success of every organisation. It helps IT not only to respond quickly to change requests from users, but also to do so securely and cost effectively.


When the concept of business services actually becomes a natural part of day-to-day working practices, the IT department will be able to be far more proactive in putting forward the new opportunities that technology makes available.

But its advice and suggestions will need to be boosted by the trust business users place in IT, something which improvements in responsiveness and service levels are likely to bring. Introducing greater transparency to IT costing via activity or consumption charging will also help.

The final chart on visions of the future looks at how systems should be architected (figure 4).

Figure 4

The chart highlights how widely the idea of building applications using component-type architectures is seen as desirable. Just a few years ago service-oriented architecture met considerable resistance from the IT community but its value now appears to be recognised.

Such acceptance may grow as organisations look to IT to support new business processes and as modern coding tools and techniques are adopted.

Perfect vision

Those surveyed were asked to predict how long it would take for their perfect vision of IT to become a reality, if ever.

Looking at operations and management (figure 5), we see there is a consensus that the data centre of 2020 will take advantage of solutions that automate the provision of services. Equally, there will be widespread use of tools that automatically optimise resource usage in step with fluctuating demands.

The relatively high numbers of respondents with such solutions already in place or with projects underway may reflect the early adoption of sophisticated management tools as organisations move beyond the first wave of server virtualisation.

Figure 5

When it comes to combining internal resources with provider and hosted services, doubt is expressed about whether it will be easy to move workloads back and forth between private and public clouds by 2020. These results are in line with the lack of interest in such capabilities shown in figure 2.

On the topic of organisation and services, figure 6 again shows that early adopters are well represented in the survey sample. But it is still clear that a majority of organisations expect by 2020 to be running their data centre on the basis of service delivery rather than systems management, with the reporting of resource consumption and chargeback modelling adopted much more widely than today.

Figure 6

Conversely, the survey indicates much less enthusiasm for self-provision of IT services to end-users. This may reflect the fear of many IT professionals that end-users will be tempted to install every application within reach with little thought for the financial consequences.

It will be interesting to see how this changes if chargeback reporting becomes widely deployed.

Finally, returning to the architectural piece and picking up on the last item in figure 7, it is worth recognising that a large number of organisations doubt that they will ever run their data centres along the same lines as the global service providers.

[Figure 7]

This makes perfect sense, given that enterprise data centres must support a much wider variety of workloads and platforms, including 30 years of legacy that is not likely to be cleared up in its entirety over the next seven years. The data centres of global service providers in contrast are designed specifically for the applications and services they provide.

The same but different

The data centre of 2020 may well superficially resemble the data centre of today but under the skin significant changes will be visible. Of these, the adoption of a service-centric mentality in how IT is run is arguably the most important.

It is also likely that chargeback reporting will become far more widespread. Focusing on business service delivery management rather than infrastructure administration will enhance IT professionals’ ability to help the organisation maximise new opportunities.


This methodology, built on top of rapid advances in management tools that automate many routine operations, also has the potential to bridge the perceived lack of alignment between IT and business users. Although this gap is nothing like the chasm people often claim, there are many ways in which IT can be more responsive and can be seen as an enabler rather than an inhibitor of business performance.

But change, the only constant, is likely to be as important in the data centre of 2020 as it is today, so the question arises of whether there are issues unrelated to technology that could limit the impact of the advances coming down the line.

However rapidly the data centre of 2020 can deploy new services, those services will be able to operate only at the speed of the change management processes that organisations have wrapped around their IT systems.

With the entire IT infrastructure highly responsive and administered by closely integrated teams, the operational processes surrounding IT will need to move just as fast. That is not always the case today and it is often factors outside the direct control of IT professionals that slow things down.

This highlights the risks of unilateral planning and the need for IT and business people to work together on a vision for the data centre that works for everyone. ®

  • Tony Lock is programme director at Freeform Dynamics


Posted in Data Centre, 20th December 2012 13:25 GMT

Categories: DataCenter

Explain: Tier 1 / Tier 2 / Tier 3 / Tier 4 #DataCenter | #cre #ccim #sior

December 28, 2012 Leave a comment


Explain: Tier 1 / Tier 2 / Tier 3 / Tier 4 Data Center


Q. What are data center tiers? What is a Tier 1 data center? Which tier / level is best for maximum uptime?

A. The Tier 1 to Tier 4 ratings are a standardized methodology used to define the uptime of a data center. This is useful for measuring:

a) Data center performance
b) Investment
c) ROI (return on investment)

A Tier 4 data center is considered the most robust and the least prone to failures. Tier 4 is designed to host mission-critical servers and computer systems, with fully redundant subsystems (cooling, power, network links, storage, etc.) and compartmentalized security zones controlled by biometric access control methods. Naturally, the simplest is a Tier 1 data center, used by small businesses or shops.

  • Tier 1 = Non-redundant capacity components (single uplink and servers).
  • Tier 2 = Tier 1 + Redundant capacity components.
  • Tier 3 = Tier 1 + Tier 2 + Dual-powered equipment and multiple uplinks.
  • Tier 4 = Tier 1 + Tier 2 + Tier 3 + all components are fully fault-tolerant including uplinks, storage, chillers, HVAC systems, servers etc. Everything is dual-powered.

    Data Center Availability According To Tiers

    The levels also describe the availability of data from the hardware at a location as follows:

    • Tier 1: Guaranteeing 99.671% availability.
    • Tier 2: Guaranteeing 99.741% availability.
    • Tier 3: Guaranteeing 99.982% availability.
    • Tier 4: Guaranteeing 99.995% availability.
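Those guarantees translate directly into the maximum downtime each tier permits per year. A quick sketch of the conversion, using a 365-day (8,760-hour) year:

```python
def annual_downtime_hours(availability_pct):
    """Convert a guaranteed availability percentage into the
    maximum downtime it permits over a 365-day year."""
    return (100 - availability_pct) / 100 * 365 * 24

for tier, pct in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier}: {annual_downtime_hours(pct):.1f} hours/year")
# Tier 1: 28.8 hours/year
# Tier 2: 22.7 hours/year
# Tier 3: 1.6 hours/year
# Tier 4: 0.4 hours/year
```

In other words, the jump from Tier 1 to Tier 4 is the difference between more than a full day of allowable downtime and roughly 26 minutes.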

via Explain: Tier 1 / Tier 2 / Tier 3 / Tier 4 Data Center.

Categories: DataCenter