
Archive for April, 2013

Industrial Sector Demand Rising | #CRE #CCIM #SIOR

April 19, 2013

Industrial Sector Demand Rising.

The National Association of Industrial and Office Parks (NAIOP) is reporting an increase in demand for industrial space in the U.S. as the global economy continues to improve, and officials believe the country will see as much as 425 million square feet of absorption by the end of next year. A lack of industrial development and a retreat from the "just in time" supply-chain model are driving interest in the sector, whereas not long ago the yield from real rents could not justify the cost of construction. That is why some experts argue there will be a return of interest in older class-B buildings that offer the same utility without the amenities of class-A properties. For more on this, continue reading the following article from National Real Estate Investor.

Over the past year we have seen continued improvement in economic activity across the world, creating increased demand for industrial space in the U.S. The National Association of Industrial and Office Parks (NAIOP) projects that, following a solid 100 million sq. ft. of absorption in 2012, the US will experience approximately 425 million sq. ft. of positive absorption between 2012 and 2014.

This projection of increasing demand reflects a combination of expected continued economic growth and positive fundamentals driving the U.S. industrial sector. Lower gas and electric costs in the U.S. compared to developing economies are leading to manufacturing being "on-shored." An uptick in e-commerce is increasing the amount of goods stored in warehouses as opposed to bricks-and-mortar retail stores. The recent devastating Japanese tsunami and Thai floods highlighted the dangers of thinly stretched supply chains – entire plants had to be shut down because of inadequate inventory of a small, single out-sourced part. As a result, there appears to be some reduced adherence to strict "just in time" supply chain management, where companies keep only enough inventory on hand to meet immediate needs.

According to CBRE[1], due to increased demand, the national availability rate for industrial real estate in the U.S. declined to 13.1 percent in the third quarter of 2012. CBRE projects availability to fall to 12.2 percent by year-end 2013 and 11.3 percent by year-end 2014, as net absorption continues to outpace new development. This poses the question – why isn't there more new development to satisfy this increasing demand?

A recent industry report[2] that analyzes industrial construction trends suggests this restricted supply stems primarily from a lack of recovery in real rents to levels that would provide the required returns to justify development. However, to fully explain the lack of industrial development it is important to examine how industry dynamics have created a disparity between the type of buildings supplied and what the "average" industrial tenant demands.

Most industrial real estate developers tend to build class-A buildings with 32 ft. to 36 ft. clear-height, cross-docked loading and state-of-the-art fire suppression sprinklers and lighting. While these are aesthetically pleasing and functionally efficient, their premium building costs require premium rents to justify their development. However, tenants that actually need the “bells and whistles” associated with class-A buildings may only be a small portion of total tenant demand.

According to a recent white paper[3] that examines the availability rates of industrial properties, it appears the majority of industrial tenants may be generally content with older class-B buildings. These buildings may have less overall functionality than class-A buildings, but are priced for the industrial "utility" they deliver. The high tenant retention rates experienced by these buildings – especially if they are single-tenant occupied – are evidence that their level of functionality (industrial tenant "utility") meets tenant demands.

The paper also examined the common belief that industrial real estate is particularly susceptible to obsolescence – that, due to outdated design, buildings lose their functionality for industrial tenants. In contradiction to this belief, the author discovered that older industrial buildings generally boast higher occupancy rates than newer buildings. Specifically by decade of construction, buildings built in the 1950s had higher rates of occupancy than those built in the 60s, and so on, with this trend continuing to buildings built in 2000 and beyond. These statistics seem to refute the common misperception of diminished utility and desirability of older class-B industrial buildings.

Overall, the aforementioned factors impacting the supply and demand dynamic create a compelling case for the attractiveness of class-B industrial buildings among a variety of constituencies. Class-B industrial property owners and operators benefit from low tenant rollover because of the buildings' infill locations and sufficient functionality. Tenants benefit from low rental rates, while investors can expect stable and predictable cash flows and lower volatility. With somewhat less functionality than their class-A counterparts but sufficient utility for their occupants, it can be argued that class-B buildings deliver better "bang for the buck" for tenants. In short, today and for the foreseeable future, class-B properties will continue to provide an important supply component in meeting the demand for industrial space as the new industrial revolution gains momentum.

Benjamin S. Butcher is CEO of STAG Industrial (NYSE: STAG).

Written by:

Benjamin S. Butcher

CIBC | Strong year for Canadian property market | #CRE #CCIM #MONTREAL #CANADA

April 12, 2013

Canadian commercial property market heading into another strong year: CIBC

REIT returns to remain attractive, fuelled by low rates and solid fundamentals

TORONTO, April 10, 2013 /CNW/ – Canada’s commercial real estate sector and REIT investment market appear set to outperform for a fifth-straight year, according to CIBC World Markets Inc.

“All of the fundamentals seem to be supporting [the] continuation of [an] extended recovery” from the market lows of 2008, says Allan Kimberley, Vice-Chairman, Real Estate Investment Banking at CIBC.

In a series of notes released today at the bank's 18th annual real estate conference in Toronto, CIBC says low interest rates, the continued availability of equity and debt, and healthy supply-demand fundamentals have set up Canada's real estate capital markets for another strong year. These conditions are relatively unchanged from 2012, which saw "record levels of new issuance, total returns exceeding those of the broader S&P/TSX Composite index, a growing list of IPO and M&A activity, against a backdrop of declining volatility," says Mr. Kimberley.

Alex Avery, a CIBC Equity Analyst who covers the commercial real estate sector, also sees favourable property and REIT market conditions continuing in 2013, with one caveat.  “While current real estate and REIT investment market conditions remain highly attractive in many respects, property and REIT pricing have risen largely to reflect the favourable current environment.  We expect attractive returns from Canadian REITs in 2013, but more modest than seen in recent years.”

Mr. Avery says returns from REITs in 2013 will be driven by attractive distribution yields and modest further appreciation in unit prices. Over the next 12-18 months he’s forecasting returns to “average 5-10 per cent, comprising close to 6 per cent in average yield and 0-5 per cent in capital appreciation.”  REITs most likely to outperform will be ones that deliver the highest funds from operations (FFO) growth, he says.

“With more than a dozen new REIT formations during 2012, and the potential for as many in 2013, the Canadian REIT universe is expanding rapidly to offer investors numerous new alternatives,” he adds.  “We believe these new entrants offer the greatest opportunity for investors to outperform the broader REIT group, with smaller, growth-oriented REITs offering significantly higher FFO growth potential than the larger capitalization, more established REITs. However, these new entrants also tend to lack liquidity and a public track record of financial results and/or of management ability to execute strategy.”

Two factors that can spoil attractive property fundamentals – the cost and availability of debt and supply of new developments – remain muted and will likely remain so this year, according to Mr. Avery.

“Wide spreads and forecasts for higher, but still low benchmark interest rates suggest favourable borrowing conditions could continue. Committed and proposed development activity currently remains measured in the context of the overall inventory of investment property in Canada, notwithstanding development proposals having picked up sharply in recent months.”

In a separate note, Avery Shenfeld, Chief Economist at CIBC, says the real estate market will be supported by “national vacancy rates for both office and industrial space [which] are likely to remain well-contained” while “retail properties will continue to benefit from new entrants from the U.S.”

Meanwhile, the combination of historically low interest rates, accessible credit markets, a high-yield market that continues to expand and healthy corporate fundamentals should support M&A activity. “We expect M&A activity could continue in 2013, with privatizations among the higher-quality and larger capitalization REITs, and mergers between smaller capitalization REITs,” says Mr. Avery.

In 2012, real estate was the third most active sector in Canadian M&A, behind oil and gas, and diversifieds.

CIBC | Canadian commercial property market heading into another strong year: CIBC.

Natural gas plants cool data centres as a by-product in an audacious winner for Uptime’s Green IT #CRE

April 11, 2013

 

Liquid Gas Plants Could Power Data Centres For Nothing

An ingenious proposal to locate data centres near liquid natural gas (LNG) plants could provide all their cooling and electrical power for nothing – and the group behind it hopes to interest European providers in the concept.

Natural gas storage plants produce excess refrigeration and waste enough energy to run a data centre, according to TeraCool. The plan is being looked at by LNG plant owners in several countries, and it has won the Audacious Idea prize in the Uptime Institute's 2013 Green Enterprise IT Awards.

 

Free energy from gas plants

Natural gas is typically liquefied at the places where it is extracted from the ground and transported in liquid form in tankers to LNG plants where it is stored in giant cryogenic tanks. When needed, it is turned back into a gas (vaporised) for circulation in the gas supply network.

Liquid gas stores energy, and the vaporisation process releases that energy, while also producing very low temperatures. However, this energy and cooling normally go to waste, because LNG plants are situated away from population centres which could use them.

“We think there is a tremendous opportunity,” Bob Shatten, president of TeraCool, told TechWeekEurope. “We have had interest from some LNG terminals – now we need to get the data centre world to step outside of the box and align their interests at one of these locations.”

TeraCool proposes that data centres could be built near the LNG terminals – as long as there are good data connections – improving the energy efficiency of both sites. The waste heat from the data centre's servers could help vaporise the gas, and the energy released could power the data centre.

“The system adds an additional refrigeration loop to the circuit in which the refrigerant is pressurized, warmed and vaporised,” explains the Institute’s citation. “The expanding refrigerant drives a turbine coupled to a generator to produce electricity in a combustion-free, emissions-free process.”

LNG plants are often huge, and some could easily provide enough power for the largest data centres ever built, said Shatten. “The upper bound for data centres is around 90MW in the US – and one terminal we looked at in Seoul, Korea, has 22 storage tanks, and could provide 350MW of cooling and 87MW of electricity.”

Since a 90MW data centre, operating at a PUE efficiency figure of 1.3, would only need around 27MW of cooling, that’s easily enough.
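As a rough sketch of the arithmetic behind that figure (treating, as the article implies, all non-IT overhead at PUE 1.3 as cooling load):

```python
# Back-of-the-envelope check of the cooling figure quoted above.
it_load_mw = 90.0   # IT load of the largest US data centres, per the article
pue = 1.3           # facility PUE assumed in the article

total_facility_mw = it_load_mw * pue           # 117 MW total draw
cooling_mw = total_facility_mw - it_load_mw    # 27 MW of non-IT overhead, treated as cooling

print(cooling_mw)            # 27.0
print(cooling_mw <= 350.0)   # True: well within the 350 MW of cooling from the Seoul terminal
```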

The main limitation of the idea is the variation in the amount of gas being vaporised. The data centre would only be able to rely on the steady output of the plant (the “minimum gas sendout”). However, as Shatten points out, gas is a transition fuel from coal, and is increasingly used in generators that power the base load of countries’ national grids.

European data centres are migrating northwards to some extent at the moment, to countries like those in the Nordic region, where cooling can be had for nothing from the surrounding air. Co-location with LNG plants could be particularly useful in hot countries in Southern Europe, like Spain and Portugal, where cooling is harder to come by.

“Once this happens, it will happen fast,” said Shatten. “The data centre has very little to lose by trying this – it can get its money back very quickly on the energy savings.”

Uptime Top Ranking

The other Green Enterprise IT awards went to actual data centres – but most featured liquid cooling. The University of Leeds was recognised for using British company Iceotope's liquid cooling system for the servers in its high performance computing (HPC) system.

Interxion won a “retrofit” award for a system that uses sea water to cool multiple data centres in Stockholm, and then uses the warm water to heat local offices before returning it to the sea. The firm says it reduced its energy needs by 80 percent and got its PUE down to 1.09.

Other winners included the US National Center for Atmospheric Research, which achieved a PUE of 1.08 in a new high-performance computing (HPC) facility; a design innovation award went to TD Bank, which included rainwater harvesting and onsite generation in a data centre.

 

Gas-Cooled Data Centre Idea Wins Green Prize.

First-Quarter 2013 Office Fundamentals Improve | #CCIM Institute | #CRE #SIOR

April 11, 2013


First-Quarter 2013 Office Fundamentals Improve | CCIM

An additional 3 million square feet of office space was absorbed in 1Q13, while new office construction increased by 7.1 msf, to 48.9 msf, during the same period, according to a Cassidy Turley report.

However, growth in the office sector has begun to slow down overall, the report notes. “Market fundamentals continue to improve, but at the same time, the office sector is clearly going through a transformation,” says Kevin Thorpe, chief economist at Cassidy Turley. “Many businesses are reassessing space needs and recognizing they can function perfectly well with a smaller, more efficient footprint.”

At 15.4 percent, the stagnant office vacancy rate remains 200 basis points above pre-recession levels. Average asking rents also stalled at $21.63 in 1Q13.

Top 10 Office Markets, 1Q13

based on YOY % rent growth

1. New York (11.0%)

2. Salt Lake City (10.9%)

3. San Jose/Silicon Valley, Calif. (10.3%)

4. Austin, Texas (6.6%)

5. Denver (6.2%)

6. Houston (5.6%)

7. Dallas (4.7%)

8. Nashville, Tenn. (4.3%)

9. New Haven, Conn. (4.3%)

10. San Mateo, Calif. (3.4%)

First Quarter Office Sector Fundamentals Improving, Demand Remains Subpar

Published on  April 02, 2013


WASHINGTON, DC – Demand for office space in the first quarter of 2013 flattened as businesses continue to push for space efficiency, according to research released today by Cassidy Turley, a leading commercial real estate services provider in the U.S.

U.S. office markets absorbed 3 million square feet (msf) of office space in the first quarter, down from 23 msf in the fourth quarter. Although this marks the third straight year of consistent net growth in the office sector, the first quarter demand figures were the weakest since the recovery began in 2010. Vacancy rates in the first quarter remained flat at 15.4% – still 200 bps higher than pre-recession levels.

“Market fundamentals continue to improve, but at the same time, the office sector is clearly going through a transformation,” said Kevin Thorpe, Chief Economist at Cassidy Turley. “Many businesses are reassessing space needs and recognizing they can function perfectly well with a smaller, more efficient footprint.  As a result, job growth is not giving us the same pop in demand that we have grown accustomed to.”

Average asking rents in the first quarter of 2013 registered at $21.63, unchanged from the same period a year ago. New office construction increased from 41.8 msf in the fourth quarter to 48.9 msf in the first quarter of 2013.

“The development pipeline remains lean,” Mr. Thorpe said.  “Even with a slight pickup this quarter, new supply coming to the market is still 30% below the norm. The supply constraints are critically important for restoring balance to the office sector.”

The top 10 strongest markets in terms of demand for office space were Dallas, with 728,000 sf of net absorption; Tampa, with 613,000 sf; Boston, with 610,000 sf; Denver, with 576,000 sf; Minneapolis, with 491,000 sf; Northern New Jersey, with 434,000 sf; Seattle, with 396,000 sf; Charlotte, with 328,000 sf; Raleigh-Durham, with 325,000 sf; and Suburban Maryland, with 299,000 sf.

The top 10 strongest markets in terms of rent growth were New York, with 11% year-over-year rental appreciation; Salt Lake City, with 10.9%; San Jose/Silicon Valley, with 10.3%; Austin, TX, with 6.6%; Denver, with 6.2%; Houston, with 5.6%; Dallas, with 4.7%; Nashville, with 4.3%; New Haven, CT, with 4.3%; and San Mateo, CA, with 3.4% rent growth.

 

 

 

Why Consider a Modular Data Center? #DataCenter #cre #ccim

April 10, 2013


via Why Consider a Modular Data Center? » Data Center Knowledge.

By: Bill Kleyman

This is the third article in the Data Center Knowledge Guide to Modular Data Centers series. The initial black eye for containers and the modular concept was mobility. The Sun Blackbox was seen on oil rigs, in war zones and in places a data center is typically not found. To an industry of large brick-and-mortar facilities that went to all extremes to protect the IT within, the notion of this data center in a box being mobile was not only unattractive, but laughable as a viable solution. What it did do, however, was start a conversation about how the very idea of a data center could benefit from a new level of standardized components and IT delivered in a modular fashion around innovative ideas.

Faced with economic downturn and credit crunches, businesses took to modular approaches as a way to get funding approved in smaller amounts and mitigate the implied risk of building a data center. The two biggest obstacles typically cited for data center projects are capital and speed of deployment. The traditional brick-and-mortar data center takes a lot of money and time to build. Furthermore, the quick evolution of supporting technologies further entices organizations to work with fast and scalable modular designs. Outside of those two primary drivers there are many benefits and reasons listed for why a modular data center approach is selected.

Design

• Speed of Deployment: Modular solutions have incredibly quick timeframes from order to deployment. As a standardized solution it is manufactured and able to be ordered, customized and delivered to the data center site in a matter of months (or less). Having a module manufactured also means that the site construction can progress in parallel, instead of a linear, dependent transition. Remember, this isn’t a container — rather a customizable solution capable of quickly being deployed within an environment.

• Scalability: With a repeatable, standardized design, it is easy to match demand and scale infrastructure quickly. The only limitations on scale for a modular data center are the supporting infrastructure at the data center site and available land. Another characteristic of scalability is the flexibility it grants by having modules that can be easily replaced when obsolete or if updated technology is needed. This means organizations only need to forecast technological change a few months in advance, so a cloud data center solution doesn’t have to take years to plan out.

• Agility: Being able to quickly build a data center environment doesn’t only revolve around the ability to scale. Being agile with data center platforms means being able to quickly meet the needs of an evolving business. Whether that means providing a new service or reducing downtime, modular data centers are designed directly around business and infrastructure agility. Where some organizations build their modular environment for capacity planning, others leverage modular data centers for highly effective disaster recovery operations.

• Mobility and Placement: A modular data center can be delivered wherever it is desired by the end user. A container can claim ultimate mobility, as an ISO-approved method for international transportation. A modular solution is mobile in the sense that it can be transported in pieces and re-assembled quickly on-site. Mobility is an attractive feature for those looking at modular for disaster recovery, as it can be deployed to the recovery site and be up and running quickly. As data center providers look to take on new offerings, they will be tasked with staying as agile as possible. This may very well mean adding additional modular data centers to help support growing capacity needs.

• Density and PUE: Density in a traditional data center is typically 100 watts per square foot. In a modular solution the space is used very efficiently, with densities of as much as 20 kilowatts per cabinet. The PUE can be determined at commissioning, and because the module is pre-engineered and standardized, PUEs can be as low as 1.1–1.4. The PUE metric has also become a great gauge of data center green efficiency. Look for a provider that strives to break the 1.25–1.3 barrier, or at least one that’s in the +/- 1.2 range. (A rough sizing sketch based on these figures follows this list.)

• Efficiency: The fact that modules are engineered products means that internal subsystems are tightly integrated, which results in efficiency gains in power and cooling in the module. First-generation and pure IT modules will most likely not have efficiency gains other than those enjoyed from a similar containment solution inside of a traditional data center. Having a modular power plant in close proximity to the IT servers will save money on costly distribution gear and reduce power loss. There are opportunities to use energy management platforms within modules as well, with all subsystems being engineered as a whole.

• Disaster Recovery: Part of the reason to design a modular data center is resiliency. A recent Market Insights Report conducted by Data Center Knowledge points to the fact that almost 50% of the surveyed organizations are looking at disaster recovery solutions as part of their purchasing plans over the next 12 months. This means creating a modular design makes sense. Quickly manufactured and deployed, the modular data center can serve as a means for direct disaster recovery. For those organizations that have to maintain maximum uptime, a modular architecture may be the right solution.

• Commissioning: As an engineered, standardized solution, the data center module can be commissioned where it is built and require fewer steps to be performed once placed at the data center site.

• Real Estate: Modules allow operators to build out in increments of power instead of space. Many second generation modular products feature evaporative cooling, taking advantage of outside air. A radical shift in data center design takes away the true brick and mortar of a data center, placing modules in an outdoor park, connected by supporting infrastructure and protected only by a perimeter fence. Some modular solutions offer stacking also — putting twice the capacity in the same footprint.
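To make the density and PUE figures above concrete, here is a minimal sizing sketch. The cabinet count and the specific PUE value are hypothetical assumptions chosen from the ranges quoted in the article, not vendor data:

```python
# Illustrative module sizing using the density and PUE ranges quoted above.
cabinets = 20            # hypothetical module size
kw_per_cabinet = 20.0    # high-density figure cited for modular solutions
pue = 1.2                # assumed PUE, within the quoted 1.1-1.4 range

it_load_kw = cabinets * kw_per_cabinet        # 400 kW of IT load
facility_load_kw = it_load_kw * pue           # ~480 kW total draw at PUE 1.2
overhead_kw = facility_load_kw - it_load_kw   # ~80 kW for cooling, power conversion, etc.

# Floor area a traditional 100 W/sq ft data center would need for the same IT load:
equivalent_sq_ft = it_load_kw * 1000 / 100    # 4,000 sq ft

print(facility_load_kw, overhead_kw, equivalent_sq_ft)
```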

Operations

• Standardization: Seen as part of the industrialization of data centers, the modular solution is a standardized approach to building a data center, much like the approach Henry Ford took to building cars. Manufactured data center modules are constructed against a set model of components at a different location instead of the data center site. Standardized infrastructure within the modules enables standard operating procedures to be used universally. Since the module is prefabricated, the operational procedures are identical and can be packaged together with the modular solution to provide standardized documentation for subsystems within the module.

• DCIM (Data Center Infrastructure Management): Management of the module and components within is where a modular approach can take advantage of the engineering and integration that was built into the product. Many, if not all, of the modular products on the market will have DCIM or management software included that gives the operator visibility into every aspect of the IT equipment, infrastructure, environmental conditions and security of the module. The other important aspect is that distributed modular data centers will now also be easier to manage. With DCIM solutions now capable of spanning the cloud, data center administrators can have direct visibility into multiple modular data center environments. This also brings up the question of what’s next in data center management.

• Beyond DCIM – The Data Center Operating System (DCOS): As the modular data center market matures and new technologies are introduced, data center administrators will need a new way to truly manage their infrastructure. There will be a direct need to transform complex data center operations into simplified plug & play delivery models. This means lights-out automation, rapid infrastructure assembly, and even further simplified management. DCOS looks to remove the many challenges which face administrators when it comes to creating a road map and building around efficiencies. In working with a data center operating system, expect the following:
– An integrated end-to-end automated solution to help control a distributed modular data center design.
– Granular centralized management of a localized or distributed data center infrastructure.
– Real-time, proactive environment monitoring, analysis and data center optimization.
– DCOS can be delivered as a self-service automation solution or provided as a managed service.

Enterprise Alignment

• Rightsizing: Modular design ultimately enables an optimized delivery approach for matching IT needs. This ability to right-size infrastructure as IT needs grow enables enterprise alignment with IT and data center strategies. The module or container can also provide capacity quickly when needed for projects or temporary capacity adjustments. Why is this important? Resources are expensive. Modular data centers can help right-size solutions so that resources are optimally utilized. Over- or under-provisioning of data center resources can be extremely pricey — and difficult to correct.

• Supply Chain: Many of the attributes of a modular approach speak to the implementation of a supply chain process at the data center level. As a means of optimizing deployment, the IT manager directs vendors and controls costs throughout the supply chain.

• Total Cost of Ownership:
– Acquisition: Underutilized infrastructure due to over-building a data center facility is eliminated by efficient use of modules, deployed as needed.
– Installation: Weeks or months instead of more than 12 months.
– Operations: Standardized components to support, and modules engineered for extreme efficiency.
– Maintenance: Standardized components enable universal maintenance programs.
Information technology complies with various internal and external standards. Why should the data center be any different? Modular data center deployment makes it possible to quickly deploy standardized modules that allow IT and facilities to finally be on the same page.
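As a rough illustration of how such a TCO comparison might be framed, here is a minimal sketch; every number below is a hypothetical placeholder, not a figure from the article or from any vendor:

```python
# Hypothetical TCO framing: a traditional build versus modular increments.
# All figures are placeholders for illustration only.
def total_cost_of_ownership(acquisition, installation,
                            annual_operations, annual_maintenance, years):
    """Sum one-time costs plus recurring costs over the evaluation horizon."""
    return acquisition + installation + years * (annual_operations + annual_maintenance)

traditional = total_cost_of_ownership(
    acquisition=12_000_000,      # full facility built (and partly idle) up front
    installation=1_500_000,      # more than 12 months of construction and fit-out
    annual_operations=900_000,
    annual_maintenance=400_000,
    years=5,
)

modular = total_cost_of_ownership(
    acquisition=7_000_000,       # only the modules actually needed today
    installation=400_000,        # weeks or months of site work
    annual_operations=700_000,   # engineered-for-efficiency subsystems
    annual_maintenance=250_000,  # one maintenance program across identical modules
    years=5,
)

print(traditional, modular, traditional - modular)
```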

Categories: DataCenter

Statistics Canada — Labour Force Survey, March 2013 #CRE

April 5, 2013

 

The Daily — Labour Force Survey, March 2013.

Following an increase the previous month, employment declined by 55,000 in March, all in full-time work. The unemployment rate rose 0.2 percentage points to 7.2%.

 


 

Despite the decline in March, employment was 1.2% or 203,000 above the level of 12 months earlier, with the increase mainly in full-time work. Over the same period, the total number of hours worked also rose by 1.2%.

Provincially, employment declined in Quebec, British Columbia and Alberta, and edged down in Ontario. The only province with an increase was Nova Scotia.

In March, there were fewer people employed in three industries: accommodation and food services, public administration and manufacturing. At the same time, there was little change in the other industries.

There were 85,000 fewer private-sector employees in March, while the number of self-employed rose by 39,000 and the number of public-sector employees was little changed. Compared with 12 months earlier, the number of private-sector employees increased by 1.0% or 111,000, while the number of self-employed was up 2.1% or 55,000 as a result of the gains in March. Public-sector employment was little changed over the 12-month period.

Employment in March decreased among people aged 25 to 54, while there was little change among youths and people aged 55 and over.


 

Provincial summary

Employment in Quebec declined by 17,000 in March, and the unemployment rate rose 0.3 percentage points to 7.7%. Despite this decrease, employment in the province was 1.6% above the level of 12 months earlier, compared with a national growth rate of 1.2%.

Employment in British Columbia was down 15,000, offsetting most of the increase in February. This pushed the unemployment rate up 0.7 percentage points to 7.0%. Compared with 12 months earlier, employment in the province was little changed.

In Alberta, there were 11,000 fewer people employed in March, the first notable decline in more than two years. The unemployment rate in the province rose 0.3 percentage points to 4.8%, still one of the lowest in the country. While there were fewer people working in March, Alberta experienced employment growth of 1.7% on a year-over-year basis.

In Ontario, employment edged down by 17,000 in March, following an increase of 35,000 the month before. The unemployment rate held steady at 7.7%, a result of fewer people participating in the labour force. Year-over-year employment growth in the province was 0.8%.

Nova Scotia was the only province with an employment increase in March, up 2,900, following a similar increase the month before. The unemployment rate in the province was 9.5%. Despite the recent gains, employment was little changed compared with 12 months earlier.

While employment in Saskatchewan was little changed in March, the province experienced the strongest year-over-year growth in the country, at 4.6%. The unemployment rate was 3.9% in March, still the lowest among all provinces.

Industry employment

In March, there were notable employment declines in accommodation and food services, public administration and manufacturing.

Employment in accommodation and food services fell by 25,000, offsetting an increase the month before. This left employment in the industry similar to the level of 12 months earlier.

Public administration employment decreased by 24,000 in March, leaving employment in this industry down slightly from 12 months earlier.

The number of workers in manufacturing declined by 24,000 in March, following a similar decrease the previous month. Employment growth in the spring of 2012 was followed by losses since the summer, leaving employment in this industry down 2.8% from 12 months earlier.

Employment declines among people 25 to 54

Among people aged 25 to 54, employment declined by 47,000, equally divided between men and women. Compared with 12 months earlier, employment for this age group was up 0.6% or 68,000.

Employment among those aged 55 and over was little changed in March. On a year-over-year basis, employment among people in this age group rose by 4.2% or 135,000, partly a result of population ageing.

Among youths aged 15 to 24, employment was also little changed in March, while their unemployment rate increased 0.6 percentage points to 14.2%, as more youths searched for work. Employment among youths has been on a slight upward trend since August 2012.

Quarterly update for the territories

The Labour Force Survey also collects labour market information about the territories. This information is produced monthly in the form of three-month moving averages. The following data are not seasonally adjusted; therefore, comparisons should only be made on a year-over-year basis.

In the first quarter of 2013, employment and the unemployment rates in Yukon and the Northwest Territories were similar to those of the first quarter of 2012. The unemployment rate was 7.6% in Yukon and 8.0% in the Northwest Territories in the first quarter of 2013.

In Nunavut, employment increased by 700 in the first quarter of 2013, compared with the same quarter in 2012, and the unemployment rate fell from 15.3% to 11.4% over the same period.

To Cloud or Not to Cloud? | Cloud Computing Journal | #CRE #CCIM #SIOR #DATACENTER #CLOUD

April 2, 2013

Today’s IT infrastructure is in the midst of a major transformation. In many ways, the data center is a victim of its own success. The growing number of technologies and applications residing in the data center has spawned increasing complexity, which makes IT as a whole less responsive and agile. While businesses are focused on moving faster than ever, large and complex infrastructure is inherently rigid and inefficient.

As a result, IT is moving outside the traditional data center into colocation facilities and cloud infrastructures – essentially Infrastructure Anywhere. The move to Infrastructure Anywhere is driven by the core objective of improving responsiveness and agility and reducing costs. For example, you can scale up resources through the cloud in minutes, not months. But for all of its benefits, this new Infrastructure Anywhere model presents critical challenges.

To make smart decisions about where to run applications and what kind of resources you need, you first must understand your workload: utilization, capacity, and cost. Gaining unified visibility is difficult when your application workloads are distributed across data centers and colocation facilities in different parts of the country or around the world. With limited visibility, how do you accurately align resources and capacity with workloads for efficient processing, cost control, and – more important – the full business value of your IT investment?

It’s All About Agility
According to the results of Sentilla’s recent survey of data center professionals about their plans for cloud deployments, agility and flexibility are the top drivers behind enterprise IT transformation initiatives such as cloud deployments – followed closely by issues of capacity and cost.

Figure 1: Key drivers for cloud computing initiatives

While agility is the prime motivating factor, the importance of cost as a factor should not be ignored. According to the survey, the major resource limitation experienced by respondents – for all infrastructure initiatives – is budget.

Figure 2: Resource limitations

Note that several of the reported constraints (personnel, storage capacity) are related to the broader issue of budget. In this sense, cost is overwhelmingly the most important constraint on IT initiatives – including cloud initiatives.

2013 Is for Planning, 2014 for Deployment
Of the organizations surveyed, nearly 50 percent plan to be deploying cloud initiatives in 2014. Many are in the planning phases. In general, we can expect cloud computing deployments to increase by 70 percent in 12 months:

Figure 3: Data center cloud initiatives, by year

Similarly, those surveyed expect to gradually migrate more workloads to cloud platforms in the coming years – with 28 percent planning to run more than half of their applications in the cloud by 2014. The barriers to cloud migration are lowering.

Figure 4: Percentage of applications planned to move to the cloud, by year

The Cloud Isn’t a Homogeneous Place
Cloud computing can refer to several different deployment models. At a high level, cloud infrastructure alternatives are defined by how they are shared among different organizations.

Figure 5: Where respondents planned to deploy cloud initiatives

Private clouds offer the flexibility of elastic infrastructure shared between different applications, but are never shared with other organizations. Hosted on dedicated equipment either on-premises or at a colocation provider, a private cloud is the most secure but the least cost-effective cloud model.

Public cloud infrastructure offers similar elasticity and scalability and is shared with many organizations. This model is best suited for businesses that need to manage load spikes and scale to a large number of users without a large capital investment. Amazon Web Services (AWS) is perhaps the most widely deployed example of public cloud infrastructure as a service.

Hybrid cloud offers the dual advantages of secure applications and data hosting on a private cloud and the cost benefits of keeping sharable applications and data on the public cloud. This model is often used for cloud bursting – the migration of workloads between public and private hosting to handle load spikes.

Community cloud is an emerging category in which different organizations with similar needs use a shared cloud computing environment. This new model is taking hold in environments with common regulatory requirements, including healthcare, financial services and government.

The research showed that organizations are evaluating a broad range of different cloud solutions, including Amazon AWS, Microsoft Azure, Google Cloud Platform, and Red Hat Cloud Computing, as well as many solutions based on OpenStack, the open source cloud computing software.

Without planning, ad hoc cloud deployments combined with islands of virtualization will only add complexity to the existing data center infrastructure. The resulting environment is one of physical, virtual and cloud silos with fragmented visibility. While individual tools may deliver insight into specific parts of the infrastructure puzzle (physical infrastructure, server virtualization with VMware, specific infrastructure in a specific cloud provider), IT organizations have little visibility into the total picture. This lack of visibility can impede the IT organization’s ability to align infrastructure investments with business needs and cost constraints.

Infrastructure Complexity Is the New Normal
While it aims to bring agility to IT, the process of cloud transformation will only increase infrastructure complexity in the near term. IT organizations must manage a combination of legacy systems with islands of virtualization and cloud technologies.

When asked about where cloud infrastructure will reside, survey respondents indicated that they will be managing a blend of on-premises and outsourced infrastructure, with the balance shifting dramatically from 2013-2014.

Figure 6: Where cloud infrastructure will reside, by year

The Need for Unified Visibility into Complex Infrastructure
As you plan your own cloud initiatives, you must prepare for multiple phases of transformation:

  • Deploying new applications to the cloud as part of the broader application portfolio
  • Migrating existing applications to cloud infrastructure where possible and appropriate
  • Managing the hybrid “infrastructure anywhere” environment during the transition and beyond

To support these phases, you need visibility into workloads and capacity across essential application infrastructure – no matter where it resides. From the physical and virtual resources, up through applications and services, you will need insight so you can align IT with business objectives.

Figure 7: The need for infrastructure insight at all levels

Essential Infrastructure Metrics for Right-Sizing Infrastructure
Decisions about which applications to deploy to the cloud and where to deploy them will require visibility into:

  • Historical, current and predicted application workload
  • Current and predicted capacity requirements of the workload
  • Comparative cost of providing that capacity and infrastructure on different platforms

For application migration scenarios, you will need to understand the actual resource consumption of the existing application. Whether it’s a new application or a migrated one, you will need to ‘right-size’ the cloud infrastructure to avoid the twin dangers of over-provisioning (and wasting financial resources) or under-provisioning and risking outages or performance slow-downs. You will need insight into:

  • Memory utilization
  • CPU utilization
  • Data transfer/bandwidth
  • Storage requirements

You will also need good metrics about the cost of running the application in your existing data center, as well as the predicted costs of running that same application on various platforms. These metrics need to factor in the total cost of the application, such as:

  • Personnel for supporting the application
  • Operating system
  • Management software
  • Cooling and power
  • Leased cloud
  • Server and storage hardware

To accurately predict the cost of running the application on cloud-based infrastructure, you will need accurate metrics around the actual, historical resource consumption of the application (storage, memory, CPU, etc.) as it maps to billable units by the provider. By understanding the actual consumption, you can avoid over-provisioning and overpaying for resources from external providers.
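A minimal sketch of that mapping might look like the following; the provider names, rates, and usage figures are hypothetical placeholders, since real billable units and prices vary by provider and change over time:

```python
# Hypothetical mapping of measured consumption to billable units for comparison.
# Rates and usage numbers are placeholders, not real provider pricing.
measured = {
    "vcpu_hours": 2_200,      # from historical CPU utilization
    "gb_ram_hours": 8_800,    # from historical memory utilization
    "gb_storage_month": 500,  # provisioned storage
    "gb_egress": 300,         # data transfer out
}

rate_cards = {
    "provider_a": {"vcpu_hours": 0.045, "gb_ram_hours": 0.006,
                   "gb_storage_month": 0.10, "gb_egress": 0.09},
    "provider_b": {"vcpu_hours": 0.050, "gb_ram_hours": 0.005,
                   "gb_storage_month": 0.08, "gb_egress": 0.12},
}

def monthly_cost(usage, rates):
    """Multiply each measured unit by its billable rate and sum."""
    return sum(usage[unit] * rates[unit] for unit in usage)

for provider, rates in rate_cards.items():
    print(provider, round(monthly_cost(measured, rates), 2))
```

Because providers change their rate cards and fee structures, the same comparison needs to be re-run periodically rather than computed once.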

Infrastructure Analytics for Cloud Migration
For any given application that is a candidate for the cloud, you want to be able to compare the total cost of the resources required across different options (public, private and on-premise).

While you could try to manually crunch these numbers in a spreadsheet, the computations are not trivial. These are decisions that you will need to make repeatedly, for each application, and revisit when an infrastructure provider changes its cost model or fee structure. For that reason you’ll want a tool that lets you get an accurate and continuous view into current costs and model “what-if” scenarios for different deployments.

Continuous Analysis for Continuous Improvement of the “Infrastructure Anywhere” Environment
Before deploying an application, what-if scenarios help you make sound resource decisions and right-size applications. After deploying, continuous analysis is key to ensuring that you are optimizing capacity and using resources most efficiently.

While individual tools may already give you slices of the necessary information, you need integrated insight into the complete infrastructure environment. Again, emerging infrastructure intelligence can assemble necessary information from applications and assets that are both on and off your premises, virtualized and not, in different platforms and locations.

Figure 8: Transformational analytics

The software can provide ‘single pane of glass’ visibility into assets and applications throughout the physical, virtual, and cloud infrastructure, including:

  • Application cost/utilization spanning different locations
  • True resource requirements of apps (for more accurate provisioning in cloud infrastructure)
  • CPU and memory utilization of apps, wherever they reside

Summary
By 2014, enterprise computing will look quite different than it does today, yet many legacy systems and infrastructure will still be with us. IT operations, business units and application architects will need to manage applications that reside in infrastructure that spans on-premise and offsite locations, with public, private, hybrid, and community cloud infrastructure. Data centers will be just one part of the total pool of infrastructure that IT manages on behalf of the business.

To manage this transformation, you will need to make smart decisions about where workloads should reside based on specific application and business needs. As these changes roll out, you will need to manage the transforming and hybrid application infrastructure to deliver the necessary performance and service levels, no matter where applications reside.

IT organizations need the insight to make fast, smart and informed decisions about where workloads and data should reside and how to deploy new applications. Rather than isolated silos of metrics and capacity and utilization data, IT needs unified visibility into infrastructure across the virtual computing environment – both on-premise and off. And they need the metrics and continuous analysis to manage the evolving infrastructure in a manner aligned with business objectives.

An emerging category of infrastructure intelligence can provide the continuous and unified analytics necessary to understand and compare your decisions and to manage the data center during the transformation. With broad ‘infrastructure insight’ you can align cloud platforms with business needs and cost requirements – delivering the agility to realize new revenue opportunities with the insight to contain the costs of existing applications.

To Cloud or Not to Cloud? | Cloud Computing Journal.
