trc networks business telephone systems

Monday, September 26, 2011

Cisco Survey: The Mobile Cloud Office Generation

The Internet Is a Fundamental Resource for Humankind, Cisco Survey Finds

Future leaders, workers, and customers will rely increasingly on cloud resources

More than half of all students and young professionals consider the Internet an “integral part of their lives,” according to Cisco’s Connected World Technology Report 2011. The findings are broadly consistent with those published in 2010, when the first such report was conducted. The next generation of leaders and workers is so accustomed to an Internet-rich life that the years ahead will bring a growing number of connected devices and gadgets, more mobile lifestyles, and a booming cloud market, as young people increasingly expect not to depend on fixed storage devices for their data and the software applications they use.

The Internet is now a fundamental resource for humankind: 33 percent of those polled consider it of equal importance to their daily lives as air, water, food, and shelter, according to the survey. Almost half of the respondents, 49 percent of college students and 47 percent of employees younger than 30, believe the World Wide Web and the Internet are “pretty close” to the level of importance that water, food, and shelter hold for the human race. Overall, four of every five college students and young professionals believe the Internet is a vital part of their daily lives, although it would be interesting to see a study asking why and how the Internet is vital to young people’s lives.

The majority of young employees, 62 percent, and 55 percent of college students polled believe their daily lives would be in jeopardy if they were denied access to the Internet, while 64 percent of students would choose an Internet connection over a car. A very interesting choice indeed, especially in light of global environmental issues and the air pollution that cars produce worldwide.

Online interaction is becoming an integral part of the next generation’s lifestyle: 27 percent of college students say updating their social network profiles matters more to them than partying, dating, listening to music, or hanging out with friends. On paper, this should be good news for the online social networks competing fiercely in the cloud, but it is an alarming trend too, for, as psychologists agree, online chat cannot substitute for in-person contact.

Tomorrow’s customers are also going to be increasingly mobile: the study found that 66 percent of college students and 58 percent of young professionals consider mobile devices like laptops, tablets, or smartphones “the most important technology in their lives.” We are therefore set to witness unprecedented growth in the mobile devices and applications market, with cloud services destined to play an important role in this mobile revolution, given the overwhelming volume of data people are going to store, share, and use online.

Moreover, smartphones and desktops are now equally important to the next generation: 19 percent of students call their smartphone the “most important device” they use daily, while 20 percent say the same of their desktop. Hardware and software vendors will thus have to adapt to a new type of highly mobile customer who demands more power, applications, functionality, and productivity from their smartphone. Part of the solution is a shift to cloud-based services, but telephone makers will be forced to seek new hardware solutions as well.

In fact, corporations would be forced to change their business and Internet strategies, including cloud adoption and cloud services, much faster than expected, according to Marie Hattar, vice president, Enterprise Marketing, Cisco. “The results of the Cisco Connected World Technology Report should make businesses re-examine how they need to evolve in order to attract talent and shape their business models. Without a doubt, our world is changing to be much more Internet-focused, and becomes even more so with each new generation,” she said in a statement.

If you believe that the Internet, with its interconnected devices, cloud services, and vast amounts of data stored and shared in the cloud, is as important as food and water, then you probably belong to the future generation: a generation whose idea of what in life on Earth is important and vital for the survival of the human race differs sharply from the mindset of the many generations that inhabited this planet before it.

In any case, IT companies and cloud service providers should be encouraged by the findings of the 2011 Connected World Technology Report, while employers should look for new ways to attract future employees: more than 60 percent of students polled last year did not believe that working in a “classic” office adds to their productivity. Instead, they prefer to work from home or public places, connecting online to virtual offices while storing and accessing their data in the cloud. The next generation will thus rely on cloud services to an alarming degree, testing the ability of both hardware and software vendors to provide, maintain, and develop a global network featuring unparalleled capabilities.

By Kiril Kirilov http://www.cloudtweaks.com

Avaya Opens New Office in Shenzhen, China

BEIJING, China — Avaya, a global provider of business communications and collaboration solutions and services, announced today it has opened a new office in the city of Shenzhen in the Guangdong Province of China. This new office will help Avaya support local customers and channel partners, and effectively manage business in the city and nearby areas.

The new office, located in Kerry Plaza, Shenzhen, is the sixth Avaya office in China, following Dalian, Chengdu, Guangzhou, Shanghai, and Beijing. Its major functions will include sales, marketing, and customer service.

Shenzhen, which now has a population of over 10 million, has been one of the forerunners in China’s economic reform. It is one of the most dynamic cities of China, and ranks number four in the country in terms of economic size. Shenzhen is also the headquarters of many large multinational and domestic companies.

“The new office in Shenzhen gives us a local presence in this important and dynamic city,” stated John Wang, Managing Director, Greater China, Avaya.
“We look forward to working closely with enterprises of all sizes in this region, with solutions designed to help them enhance productivity and customer service through the latest enterprise collaboration and communications tools. We continue to work closely with local channel partners and a broad channel ecosystem to serve this growing market.”

Cisco’s Cius tablet is all business

Source: http://www.techcentral.co.za

With all its rivals focusing their tablet computing offerings squarely at consumers, networking giant Cisco is taking a bet that its tablet offering, the Cius, will appeal to the corporate set. The product, which is now on sale in SA, is only available to business customers.

The seven-inch Android-based tablet has a 1.6GHz Intel Atom processor, a back-lit, multi-touch LCD screen with a resolution of 1024×600 pixels, 32GB of storage, and front and rear cameras both capable of 720p video (with support for Cisco TelePresence videoconferencing). It weighs just 535g.

At launch, there is only a Wi-Fi model available, but mobile broadband models will follow soon. The Cius includes a microSD slot for expanding the on-board memory. This can also be secured so the card can’t be read when used in a device other than the Cius.

Despite its capable hardware, the Cius runs the somewhat outdated Android 2.2. The company says this is because of the need for high security specifications and that there will be support for Android 3.1 in future.

Regarding the decision to go for a 7-inch screen, Cisco engineer Leon Wright says the company decided seven inches was the ideal combination of functionality and portability.


Considering its target market, it’s not surprising the Cius offers virtual desktop integration, a unified communications portfolio, cloud centralisation of device management, and high-definition videoconferencing support, all of which is complemented by its “contacts-driven” user interface. The Cius also ships with QuickOffice preinstalled for viewing and editing all major document formats.

It offers operating system and file system-level encryption, password management, the ability to push and enforce policies remotely, the large-scale provisioning of services across multiple devices, and the ability to remotely lock or wipe the device – or a batch of them – on demand.

The Cius connects to a company’s existing telecommunications network via an optional docking station, which supports fixed-line telephony, includes a full duplex speaker for hands-free use, two USB ports for a keyboard and mouse, an HDMI-out port for extending video capabilities to an external display, and a power-over-Ethernet port for networking and charging.

Another feature of the Cius that has yet to make its way to consumer tablets is the detachable battery that offers an anticipated eight-hour battery life and can be swapped out with a spare if necessary.

Cisco has also created an application store that it calls AppHQ, which is complementary to the Android Market. AppHQ hosts applications specifically designed for the Cius and tested by Cisco.

System administrators can also limit which apps users can install, per user or per group of users – for example, the sales team can all be granted access to the same apps.

System administrators can even create a custom marketplace with handpicked apps for their particular business.

The Cius costs US$750, but Cisco says this price can be driven down to around $650 for bulk orders (rand pricing is not available). That’s a hefty price tag, particularly considering the dock and any additional equipment, like the carry case, push the price even higher.

Despite the high price, Cisco should enjoy some demand for the Cius from those who do a great deal of teleconferencing. — Craig Wilson, TechCentral

Wednesday, September 21, 2011

When Transitioning to VoIP, Put the Plumbing First

By Bryan Johns | http://www.nojitter.com

In my day job, I spend a lot of time debating the benefits of open source IP PBX solutions versus their proprietary competitors. There are a lot of options in the IP PBX market today, with good and bad products in both the open source and proprietary segments. Independent of the PBX equipment decision, many aspects of a network impact the success or failure of a VoIP implementation project. Prior to my work at Digium (sponsor of Asterisk and Asterisk SCF), I spent my days building VoIP networks for corporations and carriers, and over my decade in that business I learned a very simple lesson: always put the plumbing first.

By plumbing, I mean the network-layer media, equipment and services that handle everything except making and taking phone calls. This realm includes your physical infrastructure and cabling, your switching fabric, your routing environment and your service provider networks. Deficiencies at any of these layers can render the best IP PBX on the planet a spectacular failure. It is important to remember that when you are building a network to support media, you are building for user experience and this raises the bar for performance and management across your network infrastructure.

Here are a few specific recommendations to consider when you are setting out to deploy media (voice or video) across a network of any size.

Build a Media-Capable LAN
A media-capable LAN is a local network infrastructure that can prioritize and protect real-time media moving within it. To achieve this design, a network must have a switching fabric that, at a minimum, supports the prioritization of certain traffic types via differentiated services (diffserv) and the segmentation of networks via VLANs (802.1Q). Real-time media should be insulated from the balance of data traffic on your LAN and switched with priority to ensure the highest possible quality, to facilitate troubleshooting, and to provide a quality user experience.
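
As a minimal illustration of the diffserv piece (a sketch under assumptions, not a full QoS design): an application can mark its own media packets with the DSCP Expedited Forwarding code point so that a diffserv-aware switching fabric can queue them ahead of bulk data. In Python, on platforms that honor IP_TOS (Linux does; some others silently ignore it), that looks roughly like this; the peer address and port are placeholders:

    import socket

    # DSCP EF (Expedited Forwarding) = 46; the IP_TOS byte carries the
    # 6-bit DSCP value in its upper bits, so 46 << 2 = 0xB8.
    DSCP_EF = 46 << 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

    # Datagrams sent on this socket now carry the EF marking, which
    # diffserv-aware switches can map to a priority queue.
    sock.sendto(b"rtp-payload", ("192.0.2.50", 5004))  # placeholder peer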

Give Media Its Own Route
Just as it is important to segment real-time media from other traffic inside your local network, this traffic must also be separately routed and managed at the gateway to your company’s network. If your real-time media is consolidated with all of your other traffic traversing your gateway, you have no means of selectively troubleshooting performance issues when they arise. At a bare minimum, you should route real-time media from its own IP address, but with the right equipment you can route media on its own interface and gain better control over performance and management.
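
As a sketch of the own-IP-address approach (assuming the host has a second address reserved for voice; 192.0.2.10 below is a placeholder), binding the media socket to that dedicated address stamps every outgoing packet with a source IP that gateway routing policies and firewall rules can match on, so voice gets its own route and its own counters:

    import socket

    MEDIA_ADDR = "192.0.2.10"  # hypothetical address dedicated to voice
    RTP_PORT = 5004            # even-numbered RTP port, used as an example

    # Binding to the dedicated address lets the gateway route and
    # troubleshoot voice separately from ordinary data traffic.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((MEDIA_ADDR, RTP_PORT))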

Buy Services Based on Network Analysis
Over the last five years there has been a dramatic increase in the number and types of vendors offering bring-your-own-bandwidth VoIP services over the Internet. When selecting a provider for VoIP services, it is important to investigate the quality of the connection between your company’s network and the network of your proposed vendor(s). By taking a close look at network performance attributes between you and your provider, such as router hops, jitter, latency, and voice quality beyond your provider’s network, you can qualify or disqualify potential vendors based on their performance and save yourself the headaches associated with poor voice quality down the road.
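
One rough way to run this kind of check yourself is to sample round-trip times toward a provider's network edge and derive latency and jitter from them. The sketch below assumes the candidate provider exposes a UDP echo responder (echo.example.net is a placeholder); jitter is estimated as the mean absolute difference between consecutive round trips, in the spirit of RFC 3550:

    import socket
    import statistics
    import time

    TARGET = ("echo.example.net", 7)  # hypothetical UDP echo service
    SAMPLES = 50

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)

    rtts = []
    for seq in range(SAMPLES):
        start = time.monotonic()
        sock.sendto(seq.to_bytes(4, "big"), TARGET)
        try:
            sock.recvfrom(64)
            rtts.append((time.monotonic() - start) * 1000.0)  # milliseconds
        except socket.timeout:
            pass  # a lost probe; loss rate matters too
        time.sleep(0.02)  # pace probes roughly like a 50-packet-per-second voice stream

    if len(rtts) > 1:
        jitter = statistics.mean(abs(a - b) for a, b in zip(rtts, rtts[1:]))
        loss = (SAMPLES - len(rtts)) / SAMPLES
        print(f"mean RTT {statistics.mean(rtts):.1f} ms, "
              f"jitter {jitter:.1f} ms, loss {loss:.0%}")

Repeated against several candidate providers at different times of day, numbers like these give you an apples-to-apples basis for qualifying or disqualifying vendors before you sign.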

Use Good Equipment
In my experience, equipment quality can have a significant impact on the perceived performance of a VoIP infrastructure. For example, a cheap headset or handset can cause issues that are easily misinterpreted as network performance problems, sending your network admin and your provider into a fruitless support effort that may never be traced back to the cheap equipment. The money saved buying cheap equipment pales in comparison to the lost productivity you can incur trying to troubleshoot and eradicate quality issues for your users. By buying quality equipment, you take a long list of potential contributors off the table for your support resources and let them focus on the things most likely to help them address a problem.

This is by no means a comprehensive list of considerations when transitioning to VoIP. Many factors come together to dictate the success or failure of a VoIP implementation project, but in my years in the business these points have been the stand-out contributors to VoIP network performance. Just because the technology and the services around it promise to be less expensive to operate does not mean you should deploy the least expensive equipment or service options. Do your research, be selective, and don't sell yourself short, and you'll find that a conversion to VoIP infrastructure and services will pay off to your expectations.

Global PBX Sales Surged to $59 Billion in 2010

By John Malone, Eastern Management Group http://www.nojitter.com


2010 was a PBX watershed year. It was a watershed not because businesses once again began acquiring PBX systems after being sidelined in 2009. Yes, that happened. But 2010 was a watershed because the force driving new PBX purchases was businesses' incentive to cut expenses. This is both counterintuitive and unique.

Businesses didn't have less to do in 2010, although many countries' economies suffered; they just had fewer people to do it. Unemployment was high in almost all developed countries. It was higher in every OECD (Organization for Economic Cooperation and Development) country, with the exception of Luxembourg and Germany (see figure below).

Buying a PBX in 2010 equated to a vote for ratcheting-up employee productivity. A new PBX, costing $1,200 per seat on average, gave companies permission to hold the line in other areas. That might be said to include payroll.

Productivity improvement drove 38% of all PBX purchases in 2010, according to an Eastern Management survey of IT managers. This was 15% more than those who got a new PBX to replace an "old one". Moving and company expansion were responsible for only a modest proportion of PBX sales worldwide.

Interestingly, despite the fact the developed world's economy remains stalled, global PBX purchases may continue to show resiliency. For how long? 2010 may just be the early stage of a PBX upgrade cycle for businesses that, barring the bottom falling out, could last for years.

There is clearly a PBX upgrade cycle going on. Eastern Management Group's latest survey of IT managers finds that depending on the size of business, between one-third and one-half of all organizations are already in a PBX upgrade cycle.

Is Your Business in a PBX Upgrade Cycle?

Global PBX Market
The global market for PBX systems in 2010 was $59 billion, $7.5 billion larger than in 2009. We ascribe the growth to investments in improving employee productivity.

Global PBX shipments increased 14% to a total of 50 million lines, or seats; $59 billion across 50 million lines works out to roughly $1,180 per line, consistent with the $1,200-per-seat average noted above. While 2010 sales were strong in North America, growth in that region lagged behind both EMEA and APAC, which remained ahead in total sales as well as year-over-year growth (see pie chart below).

PBX Sales By Region
50 Million Lines Sold Worldwide in 2010

Source: The Eastern Management Group

Market Leaders
Seven companies accounted for about 80% of 2010 PBX sales. Cisco had the largest global market share at 19%. Avaya was second, controlling 15% of the market. Avaya's achievement was noteworthy because it represented a significant improvement over the company’s 11% global market share in 2009. Avaya's surge reflected a 54% year-over-year increase in PBX shipments, including add-on lines to old Nortel systems. Avaya, which closed its Nortel acquisition in December 2009, had little assured business entering 2010, before elbowing its way to success.

Both Cisco's and Avaya's 2010 global market shares were lower than their shares of the North American market, which were 35% and 23% respectively. Why the difference? While both players have good distribution outside North America, hometown teams tend to perform well in their own markets.

In 2010 Alcatel-Lucent commanded an estimated 55% share of the market in its home, France. Siemens, a colossus in Western Europe, commanded 16% of the overall EMEA market in 2010.

In APAC, NEC and Panasonic collectively controlled almost half the market. We estimate Panasonic sold 52% of its worldwide shipments in Japan. Panasonic is the world's third largest supplier of PBXs and the dominant player offering inexpensive phone systems to small companies and office branches worldwide.

Aastra played out a novel strategy that gave the company a 15% market share in EMEA. It achieved this by largely eschewing global strategies and tactics in favor of local everything. Operating with a decentralized model, Aastra's 2010 value proposition was responsiveness to the local markets and customers of the companies it acquired. These businesses effectively remain intact, retaining their former product lines, installed base, and distribution channels, even under Aastra's ownership. And while Aastra comes across as a local play, it also competes for global opportunities, since the Ericsson acquisition set Aastra up in 100 countries.


Source: The Eastern Management Group

9 out of 12 PBX Companies Did Well in 2010
A 2010 survey of IT managers by Eastern Management found that 56% acquired a new PBX only after receiving multiple proposals. This shows the majority of 2010 sales were up for grabs all year long; put another way, at the onset of the year, 2010 assured no PBX company a place at the table. Avaya and Cisco, while strong, faced off against impressive sales gains by NEC, Mitel, ShoreTel, and others. At the end of the year, 9 of the 12 leading PBX companies had increased their sales and fortunes.

How Eastern Management Analyzes the PBX Market
Eastern Management's 2010 PBX market analysis is based on 8,000 interviews with IT managers in more than 150 countries. Data gathering and fact checking are supplemented with the cooperation of PBX manufacturers and distributors. Eastern Management's proprietary models and Monitor platform are used to identify 40,000 line items of PBX sales data by business size, industry, and geography.

About Eastern Management:
The Eastern Management Group is one of the world's premier strategic research companies. By delivering product research, market research and analytical tools to clients, Eastern Management facilitates decision making by IT Professionals and Technology Companies.


Putting the ‘Unified’ in Unified Communications

Polycom has rebranded its Intelligent Core platform as RealPresence, as part of its push to encourage interoperability between competitive systems in the Unified Communications (UC) sector.

Collaboration based on open standards is seen as a key policy for the company, with bosses setting an objective of growing revenue from US$1.5 billion to US$3 billion ‘in the next few years’ on the back of the strategy.

Michael Chetner, vice president of APAC for Polycom, says recent technology trends have suited Polycom’s open standards approach.

"One of the biggest drivers in the market is mobility,” Chetner says.

"Now everyone has a smartphone or an iPad, that’s driving a lot of demand.”

Chetner says this is due to the network effect, whereby users of a service benefit from the number of other users.

"We’re at a tipping point now. Everyone has these devices and they want to be able to talk to each other as simply as they would make a local call between two different networks.”

Polycom already integrates with Microsoft’s Lync, as well as with the Engage platform offered by ‘social business’ software company, Jive.

"Our platform is open to be integrated,” Chetner says.

"Whether that be a telco or a social media player, as long they’re based on standards it’ll work with our system.

"It will only be the others putting barriers up.”


Source: http://www.techday.co.nz

New! Plantronics MDA200

Plantronics is launching a new line of devices designed to help companies migrate from traditional desk-phone-only usage to a unified communications (UC) infrastructure, including a new multi-device adapter, the MDA200, as well as corded and wireless USB adapters.

The goal of these Plantronics "UC Enabler" products is to leverage your existing investment in traditional voice communications, such as desk phones, headsets, and audio communication devices. The MDA200 is a multi-device switcher that instantly turns any corded or wireless USB headset into a multipoint headset. The MDA200 lets you answer, end, hold, and switch between calls from any connected device with the press of a button. It supports Bluetooth and DECT wireless headsets.

MDA200 Wired USB headset configuration to deskphone & USB to PC

The goal of the MDA200 is to let you use your existing corporate desktop phones, headsets, and even softphone applications, while also enabling employees to use their existing personal (or corporate-issued) Bluetooth headsets paired to their mobile devices. This enables corporations to deploy softphones from leading UC vendors, including Avaya, Cisco, IBM, and Microsoft, while leveraging existing audio devices. The UC Enabler category also includes a range of USB adapters, for both corded and wireless headsets, designed to turn nearly any Plantronics headset into a USB device.
UC Enabler Portfolio


• MDA200 – a multi-device adapter that connects to a deskphone and a computer.
MSRP $129.95

• DA Series - adapters that convert Plantronics corded audio devices to USB.
MSRP $59.00 - $120.00

• BT300™ – a wireless Bluetooth USB adapter that connects a Bluetooth headset to a computer.
MSRP $99.95

• D100™ – a wireless DECT™ USB adapter that connects any Plantronics Savi or CS500 DECT headset to a computer.
MSRP $120.00

Cisco, Avaya and Panasonic Lead World in PBX and IP PBX Sales While Cisco, Avaya and NEC Earn Top Spots for Growth

PBX Sales Climb Worldwide According to Eastern Management Group
Countries Spend Heavily on Phone Systems Ignoring Growing Unemployment

NEW YORK, Sep 20, 2011 (BUSINESS WIRE) — Cisco, Avaya and Panasonic Lead World in PBX and IP PBX Sales While Cisco, Avaya and NEC Earn Top Spots for Growth

Unemployment increased throughout the world in 2010, but so did PBX and IP PBX sales. Purchases of business phone systems jumped 16% in North America and 14% worldwide, to $59 billion. Unemployment, according to the OECD, was 8.5% in 2010, up from 6.1% in 2008.

Almost 40% of PBX and IP PBX sales were to businesses wanting to increase employee productivity, which was the largest catalyst driving 2010 sales, according to an Eastern Management Group survey of more than a thousand IT managers.

Averaging $1,200 per phone, buyers made PBX and IP PBX purchases either expecting or following reductions in operating budgets. Unlike older phone systems, new PBX and IP PBX systems can markedly reduce operating expenses while improving employee productivity.

Of the world’s 12 largest PBX and IP PBX companies, nine increased sales in 2010, including Alcatel-Lucent, Mitel and ShoreTel. Each of the 12 sells PBX and IP PBX systems that deliver or support unified communications and collaboration.

Unified communications is technology that improves employee productivity and reduces operating expenses. UC&C lets employees work at home, share telephone features with colleagues around the world, communicate over multiple devices, or in industry parlance “collaborate.”

Despite the economy, global PBX purchases should continue to show resiliency according to The Eastern Management Group. Customer surveys show a PBX upgrade cycle for businesses that could last for years has just begun.

Is Your Business in a PBX Upgrade Cycle?

Employees               Yes    No
Under 500               52%    48%
501-1,000               52%    48%
1,000-5,000             49%    51%
5,000-10,000            34%    66%
10,000-19,000           38%    62%
20,000+                 48%    52%

Asia Telecom Statistics and Forecast, 2006-2015

DUBLIN–(BUSINESS WIRE)–Research and Markets (http://www.researchandmarkets.com/research/e60541/asia_telecom_stati) has announced the addition of the “Asia Telecom Statistics and Forecast, 2006-2015” report to their offering.

The Asia Telecom Statistics and Forecast report will provide you with a whole range of statistics, forecasts and graphs for the years 2006-2015F (with actuals including 1Q 2011), crucial to your competitive and strategic analysis and planning.

You can also purchase a 1-year subscription, providing you with quarterly updates. This new report includes the following indicators on a per country and a regional basis:

  • Population
  • GDP per capita
  • Telecom Service Revenue
  • Fixed telephone subscribers, penetration and growth
  • Mobile telephone subscribers, penetration and growth
  • Internet subscribers and users, penetration and growth
  • Broadband subscribers, penetration and growth
  • Inbound and Outbound International telephone traffic
  • Termination rate originating in the US

Countries Covered:

  • Australia
  • Pakistan
  • Bangladesh
  • China
  • Hong Kong
  • India
  • Indonesia
  • Malaysia
  • New Zealand
  • Philippines
  • Singapore
  • South Korea
  • Sri Lanka
  • Taiwan
  • Thailand
  • Vietnam

More at http://www.researchandmarkets.com/research/e60541/asia_telecom_stati.

Friday, September 16, 2011

Cisco's Chambers wants to go out on a high note whenever he leaves

Source: http://www.fiercetelecom.com



With over 16 years at the helm of Cisco (Nasdaq: CSCO), speculation has begun to mount as to when John Chambers will hang up his CEO hat.

Cisco faced a number of challenges this year that drove Chambers to make some bold decisions to cut costs and set a new growth path.

Feeling that Cisco had lost its focus on its core routing and switching business--one that competitors like Juniper have been happy to attack--Chambers conducted a large-scale reorganization of the company in May, shutting down non-core businesses including its Flip camera unit and laying off nearly 13,000 employees.

The company's reorganization came at a time when a number of its key executives had left the company.

Following the departure of Mike Volpi, Cisco's former head of service provider and routing businesses, in February 2007, chief development officer Charles Giancarlo also left. More recently, Cisco lost Charles Carmel, the executive known for driving Cisco's mergers and acquisitions strategy, who decided to return to his investment banking roots by joining private equity firm Warburg Pincus.

But even with the layoffs, Cisco isn't totally turning its back on buying companies that add specific value to its portfolio. In late August, Cisco moved to bolster its network and service management technologies across its flagship platforms, including the CRS-3, by purchasing the AXIOSS software assets from OSS vendor Comptel.

Despite these departures and having to conduct an aggressive turnaround plan, Chambers still has the support of the company's board and maintains he'd like to leave the company on a high note. Possible successors that have been floated include current Oracle President Mark Hurd, who came to that company after resigning from HP in the wake of allegations of sexual harassment.

"In terms of the board and the management team, we're completely in sync," Chambers said in a Reuters article. "They asked me personally would I be willing to commit to another three years."

Speaking during the company's highly anticipated financial analyst conference, Chambers gave a positive yet more conservative long-term outlook. In its revised long-term guidance, Cisco forecast revenue growth of 5 to 7 percent, down from its previous 12-17 percent forecast.

For the next three years, Cisco forecast earnings growth of 7 to 9 percent and operating margins in the "mid-20s" percentage range.

While the conservative outlook did give some analysts pause, it also represents a more realistic company outlook. "Everybody knew the old targets were off the table," said Colin Gillis, BGC Partners analyst. "It's not a surprise, it's not as bad as it could have been."

For more:
- Reuters has this article
- Bloomberg has this article

New IP Desktop Phone Portfolio From snom technology

By Robbie Pleasant | UCStrategies

Snom technology AG has announced the release of snom UC, the latest edition of the snom IP desktop phones, designed for unified communications and qualified for use with Microsoft Lync Server 2010.

Included in the new line are the snom 821 UC edition and the snom 300 UC edition, which are compatible with both Microsoft Lync and other SIP-based PBXs, as well as the snom UC600, a USB desktop phone that works with the Lync soft client.

The snom UC600 allows users to make telephone calls from their computer, complete with a handset and keypad. It comes with an embedded client for Lync 2010, providing simple deployment and endpoint management.

snom UC comes with two standards-based SIP phones from snom's VoIP phone portfolio. The first of them, the snom 821, offers expanded functionality for business applications, such as a high-resolution TFT color display, presence display and settings, and server-side address book search, among others.

The second of the SIP phones is the snom 300 UC edition, offering an entry level IP desktop phone with advanced functionality. It comes with multiple lines, programmable keys, Lync presence, and many more features.

"Snom has been very active in having their phone devices support solutions from all of the major manufacturers, and adding qualified support on Microsoft Lync makes enormous sense," says David H. Yedwab, UCStrategies UC Expert. "It makes the choices of endpoint phone devices a richer choice for users and adds an additional decision point for both customers and channels – which phones/devices should we support."

The snom UC600 is available for $149. The snom 821 UC edition is available for $249, and the 300 UC edition for $129. For more information, visit www.snom-uc.com.

41% of Enterprise Communications Application Users Worldwide to Migrate to the Cloud by 2016

NEW YORK–(BUSINESS WIRE)–In the rapidly evolving enterprise communications market, CPE vendors confront imminent erosion in their installed base as cloud services gain traction across the public, private, and hybrid cloud domains. By 2016, 41% of all enterprise communications users, or 386 million lines/seats, will be on virtual infrastructure, posing a serious danger to the CPE market. “For CPE vendors, the cloud threat is real,” says senior analyst Subha Rama. “By 2016, the communications CPE market will only grow 4.3%, while cloud communications will grow by over 21%, reaching $8 billion in revenues.”

“Enterprise mobilization is also driving migration to the cloud”

Smaller vendors with point solutions will see cloud services rapidly displace their installed bases. Large vendors are becoming cloud providers or key enablers as well. However, many of the CPE solutions are simply not cloud ready and will see performance downgrades when virtualized.

There are three prominent forces influencing cloud migration:

  1. The growing adoption of data center architectures and virtualization technologies
  2. The need to integrate multiple applications to deliver the connected experience to users across different devices, including smartphones and media tablets
  3. The promise of lower costs and increased efficiencies from standardized platforms and processes in the cloud

Enterprises are adopting a non-linear approach to cloud migration; while certain applications undergo experimentation, others are retained on premises. Mixed environments and hybridization are becoming the norm, especially among larger enterprises. However, the technology to manage hybrid clouds and to enable seamless movement of application instances across different vendor clouds is in its infancy.

“Enterprise mobilization is also driving migration to the cloud,” says practice director Dan Shey. “Cloud applications ease application delivery for businesses that are increasingly relying on access across fixed and mobile endpoints.”

The “Enterprise Cloud Applications and Vertical Analysis” (http://www.abiresearch.com/research/1008155) report offers extensive review of the migration of premise-based communications access to the cloud. Central to the analysis is quantifying the rate of CPE displacement by cloud services. Market forecasts are provided for four implementation types—CPE, private, public, and hybrid clouds—for each application type: telephony, email and collaboration, audio conferencing, web conferencing, video conferencing, and UC.

It is part of the Enterprise Mobility (http://www.abiresearch.com/products/service/Enterprise_Mobility_Research_Service) research service.

ABI Research provides in-depth analysis and quantitative forecasting of trends in global connectivity and other emerging technologies. From offices in North America, Europe, and Asia, ABI Research’s worldwide team of experts advises thousands of decision makers through 27 research and advisory services. Est. 1990.

More at www.abiresearch.com

Polycom Unveils Software Strategy to Drive Global Adoption of HD Video Collaboration Through Open Standards

PLEASANTON, CA–(Marketwire – Sep 14, 2011) – Polycom, Inc. (NASDAQ: PLCM), the global leader in standards-based unified communications (UC), today unveiled a software strategy to bring secure HD video collaboration to the broadest range of business, video, mobile, and social networking applications through standards-based infrastructure delivered on-premises, hosted, or with service providers from the “video cloud.” Polycom believes this strategy will redefine the unified communications market, accelerate the adoption of Polycom software (from infrastructure to the edge), and establish Polycom as the default choice of customers and partners for open UC and HD video collaboration solutions that work together seamlessly across any application, protocol, call control system, or end point. The software strategy is a key component of Polycom’s growth strategy with the objective of growing revenue from the company’s current run rate of approximately $1.5 billion to $3 billion in the next few years.

Polycom’s software strategy spans four key initiatives:

1. Deliver the Most Complete, Interoperable Software Platform for Universal HD Video Collaboration

2. Expand Partner Ecosystem and Bring HD Video Collaboration to Mobile and Social Platforms

3. Create the First Open HD “Video Cloud” Exchange with Service Providers

4. Continue to Set the Standard with Software Innovations that Deliver Exceptional User Experiences and Transform the Way We Work and Collaborate

Fundamental to Polycom’s strategy is a commitment to open standards-based interoperability across all of the elements in a communications environment, including multiple solutions, vendors, networks, and connection protocols. This gives customers the freedom to choose best-of-breed solutions for instant messaging, presence, call control, Web conferencing, video collaboration, mobile, and social video, from any of hundreds of different vendors, with assurance that the solutions will:

  • Seamlessly interoperate within normal business workflows;
  • Be backward-compatible with legacy investments and forward-compatible with new, emerging systems;
  • Be protocol-agnostic and able to traverse signaling and media protocols such as H.263, H.264, H.264 High Profile, SIP, SVC, VP8, TIP from Cisco, and RTV from Microsoft Corp.; and
  • Deliver the essential system scalability, reliability, security, and lifelike visual quality that define HD video collaboration experiences.

“Polycom is known for the innovative solutions that have been powering business communications for nearly 20 years and our software is at the core of enabling a highly differentiated customer experience,” said Andy Miller, president and CEO, Polycom. “Moving forward, Polycom software will be increasingly present in desktop, mobile, and social networking platforms as Polycom captures the fast-growing demand for HD video collaboration that is open, secure, and integrated into each customer’s work and social environment. From the cloud infrastructure, to group environments, to the fixed and mobile edge, customers who choose Polycom will have peace of mind knowing that Polycom’s software solutions are a living investment that adapts as needs change without requiring them to rip and replace entire UC structures. By putting the ‘unified’ in unified communications, Polycom offers customers open, best-in-class UC solutions that dramatically change the way they work, communicate, and collaborate.”

Deliver the Most Complete Interoperable Software Platform for Universal Video Collaboration
The first initiative of Polycom’s strategy is to orchestrate broad partner interoperability around Polycom’s open software platform, which has been ongoing for over a year. The Polycom® RealPresence™ Platform (formerly referred to as the UC Intelligent Core™) today delivers the industry’s only universal bridging software that supports up to 75,000 device registrations and 25,000 concurrent sessions (see related release, “Polycom Introduces Polycom® RealPresence™ Platform, the Most Comprehensive Software Infrastructure for Universal Video Collaboration”). The platform includes universal video collaboration, video resource management, virtualization, universal access and security, and video content management.

New Application Programming Interfaces (APIs) to Open the RealPresence Platform
Polycom already provides developer APIs for many of its devices, and will extend its APIs to the Polycom RealPresence Platform to enable developers, partners, and service providers to integrate business applications, billing, scheduling, directory, management, monitoring, and other functions with the platform. Through the APIs, customers can drive deeper integration into their environments, and service providers can differentiate their offerings by customizing solutions with Polycom. The Polycom Developer Program includes full API documentation, sample code, access to test and demo systems, and developer support. Field trials for the Polycom Developer Program are set to begin by year end, with availability projected in 1H 2012.

Expand Ecosystem and Bring HD Video Collaboration to Mobile and Social Platforms
Video is on its way to reaching penetration and user numbers similar to HD audio, and mobile video and social video will dramatically accelerate this trend. With video cameras embedded in tablets, PCs, and mobile phones, mobile video is already experiencing explosive growth, with the number of people participating in video chats forecast to grow 14 times, from about 10 million today to more than 140 million by 2015(1). Social video, such as on-demand video chat within business and consumer social sites, has the potential to be the next killer app for social networking.

Polycom’s second strategic initiative is designed to enable customers to communicate securely over video through mobile devices, tablets, and on social platforms. Leveraging the Polycom RealPresence Platform, customers can communicate over video with a tablet back into their enterprise video network to connect with co-workers, customers, or partners.

  • Polycom is working with leading vendors in mobility, including Apple, Motorola, Samsung and HP, to offer enterprise mobile video applications on Android, iOS, and Windows Phone 7 platforms that allow users to participate in multi-point video calls across all open-standard video solutions and devices, including via tablets, desktop video systems, or immersive telepresence rooms.
  • Polycom is integrating HD video solutions into popular social business platforms, and today announced a strategic relationship with Jive to integrate Polycom HD video solutions into Jive’s social business platform, enabling face to face video collaboration (see related announcement from today). The joint solution will allow Jive business customers to conduct live video chats, including group video calls, as well as record video meetings or messages for archiving, training, and ongoing collaboration.

Create the First Open “Video Cloud” Exchange with Service Providers
Cloud delivery of “video as a service” is another key trend driving further adoption of UC services, promising to generate a global network effect for mass video connectivity. Analysts project that 40 percent of enterprises will move to a hybrid cloud, which is a combination of on-premises applications and cloud-delivered services, by 2012. Polycom launched its video cloud initiative in June with:

  • The release of the Polycom RealPresence Platform software that supports carrier-grade UC with the reliability, availability, security, redundancy, and massive scalability required for cloud-delivery.
  • The formation of the Open Visual Communications Consortium™ (OVCC™) organization, a video cloud exchange started by Polycom and 14 of the world’s leading service providers, including AT&T, Verizon, Orange, PCCW, Telefonica, BT, Airtel, and Telstra. The OVCC is dedicated to solving the technical challenges of providing video as a service across multiple carrier networks and technologies, which today is not yet possible, through an open standards-based approach. The OVCC has successfully demonstrated a multi-vendor, high definition telepresence call across 12 service provider networks simultaneously, running on the Polycom RealPresence platform. Polycom and the OVCC organization expect to begin bringing open video exchange cloud services to market as early as mid-2012. The potential result is hundreds of millions of users worldwide connecting over video as easily as dialing their mobile. Additional service providers and value added resellers who provide managed services solutions are expected to join the OVCC group in the coming months.

Continue to Set the Standard with Software Innovations that Deliver Exceptional User Experiences and Transform the Way We Work and Collaborate
With more than 700 patents issued or pending worldwide, Polycom innovations in areas such as codecs, microphone arrays, noise suppression, voice detection, echo cancellation, UC integration, loudspeaker system design, content sharing, and system touch control have set the standard for HD voice, video, and telepresence for more than 20 years. Polycom innovations deliver an exceptional lifelike video collaboration experience that significantly boosts engagement and productivity.

As a recent example, Polycom just introduced the Polycom EagleEye Director, which orchestrates a groundbreaking group video collaboration experience by emulating professional video production techniques to focus on and provide close-up views of each speaker in a video conference using automated camera pan, tilt, and zoom. Polycom software technology inside the system makes it seem like each conference room has a Hollywood director in residence. In addition, the Polycom CX7000 system uses Polycom software innovations in combination with Microsoft to deliver the first room video telepresence solution custom-built for full integration with Microsoft Lync™. The solution takes the Lync experience customers are familiar with on their desktops and recreates it in the conference room to provide a higher level of business collaboration and productivity for room telepresence. As a result of joint development and tight integration between Microsoft and Polycom, customers get an intuitive interface, simplified UC experience, and a seamless HD video experience.

Polycom will continue to invest in delivering the industry’s most lifelike face-to-face collaboration experiences, bringing new innovations to market that extend UC interoperability and HD video to new devices and platforms, serving the hundreds of millions of knowledge workers across industry and government.

More at www.polycom.com



Hungary Opens First IPv6 Education Lab at Budapest University of Technology and Economics, Using Cisco Technology

BUDAPEST, Hungary – September 16, 2011 – Cisco and the Budapest University of Technology and Economics (BME) announced the first laboratory for training and research in the new Internet Protocol version 6 (IPv6) in Hungary.

Located at the Department of Telecommunications at the Faculty of Electrical Engineering and Informatics, the lab is an integral part of an international network of IPv6 training and research facilities and is connected to some 20 similar centers around the globe. Cisco donated the lab's networking and communications equipment.

Key Facts / Highlights:

* Operating under the umbrella of the 6DEPLOY-2 project funded by the European Union's Seventh Framework Program, the lab aims to provide an open environment for validating solutions, network setups and applications built on the next-generation Internet Protocol, known as IPv6. Internet experts, including academics, government administrators and telecom specialists, can be trained both on-site and through virtual access in innovative information and communications technology solutions relating to the adoption of IPv6.

* In terms of technology and equipment, the lab is a replica of other IPv6 education labs established under the umbrella of 6DEPLOY-2. The labs are interconnected through GEANT, the pan-European data network dedicated to the research and education communities. The resources can be used redundantly: if someone needs to conduct a test and one of the labs is busy at the time, the researcher can be directed to another lab and use its equipment remotely.

* The laboratory will be managed by the university's Cisco® Networking Academy® team. It will also help the university's collaboration with the 6DEPLOY-2 project funded by the European Union's Seventh Framework Program, whose aim is disseminating knowledge about IPv6 and supporting its deployment.

* As the world prepares for the adoption of the next-generation Internet Protocol, education on IPv6 becomes crucial. The IPv6 lab in Budapest offers the opportunity to develop a solid knowledge base on the new protocol, which will help ensure the continuity of operation of institutions and companies.

Backgrounder on IPv6

* Due to the spectacular growth in the adoption of the Internet and Internet-based technologies worldwide, public IP address space is becoming increasingly scarce. The convergence of technologies and the increasing number of devices on the Internet both require new address space.
* According to estimates from Cisco Internet Business Solutions Group (IBSG), by 2020, there will be approximately 50 billion devices connected to the Internet, each of them using one or more IP addresses. In 2008, the number of things connected to the Internet exceeded the number of people on earth. By 2020, this ratio is expected to be 1:6, meaning more than six connected devices for every person on earth.
* IPv6 means a quantum leap compared with IPv4 in terms of available address space. With the IPv6 protocol, there will be enough IP addresses to allocate 100 addresses to every atom on the earth's surface. There could be 4.8 trillion addresses for every star in the known universe (see the short calculation after this list).
* In January 2011, the Internet Assigned Numbers Authority (IANA) allocated the last free IPv4 blocks to the Regional Internet Registries (RIRs). APNIC, the Regional Internet Registry covering Asia Pacific started to distribute the last available block on April 18, 2011. RIPE-NCC, covering Europe, Middle East and parts of Central Asia, is expected to do the same in the first quarter of 2012.
* As an early pioneer in IPv6 technology, Cisco has been a driving force in developing IPv6 through various standards bodies, including the Internet Engineering Task Force, and has been shipping a wide variety of end-to-end IPv6 products and solutions.
* BME has been covering IPv6 as part of its education and research portfolio ever since the standard's development started.
* In June 2011, Cisco participated in World IPv6 Day, a 24-hour global "test drive" of IPv6. In Hungary, several organizations, including BME as well as the national research and education network operated by the National Information Infrastructure Development Institute (NIIFI), successfully participated in World IPv6 Day.
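
As a back-of-the-envelope check on the scale claims above (a minimal sketch; the device counts are the estimates cited in this list, not measurements), Python's standard ipaddress module can put numbers on the IPv4-to-IPv6 jump:

    import ipaddress

    ipv4 = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32, ~4.3 billion
    ipv6 = ipaddress.ip_network("::/0").num_addresses       # 2**128, ~3.4e38

    print(f"IPv4 space: {ipv4:.3e} addresses")
    print(f"IPv6 space: {ipv6:.3e} addresses")
    print(f"IPv6 is {ipv6 / ipv4:.3e} times larger")

    # The cited estimate of 50 billion connected devices by 2020 already
    # exceeds the entire IPv4 address space more than tenfold, while using
    # a vanishingly small fraction of IPv6.
    print(f"50e9 devices = {50e9 / ipv4:.1f}x the whole IPv4 space")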

Supporting Quotes:

* Sándor Imre, head of department, BME Department of Telecommunications

"To train internationally competitive Hungarianengineers, it is indispensable to have a world-class technology environment and to focus on cutting-edge R&D topics. The Telecommunications Department of the Budapest University of Technology and Economics, in collaboration with Cisco, has reached a landmark by launching the laboratory focusing on IPv6, the technology which will fundamentally define information technology for the next decade."

* János Mohácsi, deputy director general of NIIFI, a 6DEPLOY-2 project expert and tutor

"The cooperation between NIIFI and Cisco in the IPv6 area started more than 10 years ago. At that time, when launching the 6NET project, it was already clear that the time for changing the Internet protocol will come. The 6NET, 6DISS and 6DEPLOY projects promoted the education of IPv6 professionals worldwide. The objective of the Budapest lab is to initiate a similar development in Hungary, multiplying the number of IPv6 experts. The fact that the IPv6 lab is connected to a Cisco Networking Academy is offering a good synergy."

* Istvan Papp, director, EMEAR Public Sector, Cisco

"The introduction of IPv6 is no longer an option. By providing the lab equipment, we wanted to contribute to the education of IPv6 specialists in Hungary and thus help the country's transition to the new protocol. IPv6 is also paving the way to new technologies such as machine-to-machine communication, mobility or intelligent sensors. Participating in an international network of training and research facilities, the knowledge center at BME will be able to connect to the bloodstream of international innovation based on IPv6."

Leverage the Cloud in the Back Office

By Susan J. Campbell

Taking a platform to the cloud is a strategy gaining more and more attention in the enterprise segment, and the network back office stands to gain significant value from this migration. It’s no longer enough for the enterprise to drive innovation through cloud-based applications on the customer side of the service provider business; it’s time to take it to the back office.

A recent Alcatel-Lucent Enriching Communications article, Bridging the Cloud to the Network Back Office, argues that cloud services delivering end-to-end Service Level Agreements (SLAs) covering both the network and IT will differentiate themselves. At the same time, dynamic management for the cloud incorporates a flexible data model and the automation of OSS/BSS. To extract the most value from these platforms, efficient cloud operations must leverage best practices, tools, and standards from both the IT and telecom worlds.

It is important that service providers accelerate, differentiate, and gain agility as they scale to offer large volumes of cloud services. New IT services and applications demand faster time-to-market than existing services. Additionally, the cloud environment must address end-to-end quality of service (QoS) and efficiently handle new business models, increasingly large volumes of users, a broad range of security requirements, and different tiers of service.

With these new requirements comes a growing need for a symbiotic relationship between the network infrastructure and the cloud when delivering cloud services. To make this relationship a reality for back-office support services, it is critical that dynamic management is firmly in place. Dynamic management enables speed, simplification, cost reduction, and quality through tools, automation, and the use of pre-defined templates to create tiered operations capabilities.

When dynamic management is in place, communication service providers (CSPs) offering cloud services can effectively accelerate the business: intelligence, automation, and standardization in the back office are proven to help the CSP deliver cloud services faster and with fewer hassles, while also driving higher quality and greater visibility.

Dynamic management also enables the CSP to reduce operating costs through integration and the reuse and streamlining of services, making it easier and more cost-effective to integrate new and existing services. Differentiation is also achieved as CSPs combine their network with dynamic management to offer key capabilities that set their services apart from those of the competition.

New partners can be easily supported through the open innovation of the Application Enablement concept, allowing service providers to quickly bring large volumes of cloud services to their customers. The increased business-model agility lets these providers leverage the new capabilities to uncover new revenue streams and greater profitability.

THE NEW JABRA UC VOICE SERIES IS AVAILABLE!


Unified Communications (UC) is expected to grow to 49 million users worldwide by 2015, as more users move to integrated collaboration and conferencing platforms and to enterprise applications that drive productivity improvements, infrastructure cost reductions, and operational efficiency gains.



Jabra's exciting new line of UC VOICE products is designed to address the many different user profiles and applications, such as video conferencing, that will drive the need for professional headsets optimized for UC deployments.


Jabra Delivers a Simple Solution for First-Time Users
The Jabra UC VOICE headsets are simple to use and operate, resulting in faster user adoption and a quicker return on your investment. Some of the key highlights include the following:


-Designed for UC voice deployments in office enterprises
-Easy to deploy, use, and maintain
-Superior sound clarity for crystal-clear calling
-Faster user adoption with plug-and-play compatibility
-4 designs to accommodate different user needs and preferences
-Superior quality backed by a 2-year warranty







Model                              New Part Number   MSRP     Availability
Jabra UC Voice 150 Duo UC          1599-829-209      $39.00   10/1/11
Jabra UC Voice 150 Duo MS          1599-823-109      $39.00   10/1/11
Jabra UC Voice 150 Mono UC         1593-829-209      $34.00   10/1/11
Jabra UC Voice 150 Mono MS         1593-823-109      $34.00   10/1/11
Jabra UC Voice 250 Mono UC         2507-829-209      $49.00   9/15/11
Jabra UC Voice 250 Mono MS         2507-823-109      $49.00   9/15/11
Jabra UC Voice 550 Duo UC          5599-829-209      $69.00   10/1/11
Jabra UC Voice 550 Duo MS          5599-823-109      $69.00   10/1/11
Jabra UC Voice 550 Mono UC         5593-829-209      $59.00   10/1/11
Jabra UC Voice 550 Mono MS         5593-823-109      $59.00   10/1/11
Jabra UC Voice 750 Duo Light UC    7599-829-209      $89.00   Q1 2012
Jabra UC Voice 750 Duo Light MS    7599-823-109      $89.00   Q1 2012
Jabra UC Voice 750 Mono Light UC   7593-829-209      $79.00   Q1 2012
Jabra UC Voice 750 Mono Light MS   7593-823-109      $79.00   Q1 2012
Jabra UC Voice 750 Duo Dark UC     7599-829-409      $89.00   Q1 2012
Jabra UC Voice 750 Duo Dark MS     7599-823-309      $89.00   Q1 2012
Jabra UC Voice 750 Mono Dark UC    7593-829-409      $79.00   Q1 2012
Jabra UC Voice 750 Mono Dark MS    7593-823-309      $79.00   Q1 2012

Monday, September 12, 2011

How Rigid “Non‐Living” Storage Fails Cloud Service Providers

And What You Can Do About It

By: Marc Staimer, President & CDS of Dragon Slayer Consulting


Change. The word alone immediately elicits anxiety in IT professionals responsible for their organization's computing infrastructure. For the storage administrator, change is to be avoided because, in the minds of many, that's when things break, especially with SAN, NAS, and unified storage. And yet change has become inevitable in today's rapidly evolving computing environment. Virtualization; petabytes to exabytes of stored data; an always-on, 24x7x365 world economy; rising user expectations; expanding power and cooling requirements; larger, more expensive data centers; and more are stretching and consistently breaking traditional storage infrastructure. This is compelling IT organizations to look at Cloud Services as a viable alternative.



Cloud Services provide IT organizations with an alternative to traditional computing infrastructure. They no longer have to be IT experts to have world-class IT operations. They can run their businesses efficiently, do their jobs, stop wasting precious human cycles on IT, and do so at an equal or lower total cost of ownership. Cloud Services have shown significant value in cutting application development time, reducing deployment costs at equivalent or better quality, increasing user control, enhancing governance, speeding time to market, adding flexibility, and lowering costs. They are here to stay.



However, as service providers have delivered Cloud Services, they've quickly discovered they're not immune to the same storage problems affecting their customers. This has caused severe angst and consternation about the shortcomings of traditional DAS, NAS, and SAN storage systems. These shortcomings rear their ugly heads much sooner for service providers because of the large scale of their IT ecosystems. It quickly becomes apparent that traditional storage systems are incapable of meeting Cloud Services needs and requirements without throwing large amounts of money and/or people at the problems, neither of which service providers have in abundance.



The root cause of these service provider headaches is the extremely rigid approach of traditional storage systems. They were designed for different market requirements and, candidly, a different era in computing. They are far more manually labor-intensive, requiring extensive storage administrator expertise. This has been acceptable in the corporate IT world, where experts manage system complexity, capacity, performance, and operations as well as data protection. Experts are required because these storage functions must be forecast and pre-planned with amazing accuracy, well in advance of need. Changes to storage capacity or performance that were not anticipated and accounted for create a mad scramble with potentially dire consequences, in addition to acute levels of stress. Technology refresh is another extremely time-consuming and costly aspect of these rigid legacy storage systems, one that taxes the Cloud Services provider to the breaking point. The legacy storage model just does not work well, or at all, for this new model, especially for the Cloud Services provider.



Storage vendors argue that their systems have evolved with advanced software functionality. Although true to a point, the end result does not make their systems more alive, less rigid, or less expensive.



The lack of dynamism, adaptability, flexibility, resilience, and built-in intelligence falls well short of Cloud Services needs. To top it off, non-living storage price/performance is completely out of line with Cloud Service provider requirements. They need SAN-level or better performance, but at significantly lower cost. Market conditions require Cloud Services providers to deliver their service at a compellingly lower cost than their customers can achieve themselves. With storage being such a huge part of the total cost of ownership, getting control of its costs is essential. Traditional storage systems just do not allow the Cloud Services provider to do this.



This white paper takes a deeper look at Cloud Services operational and financial requirements; the problems legacy non-living storage systems cause Cloud Service providers; how and why the workarounds fail; and the best way to solve these problems right now.



Deploying Cloud Services with “Non-Living” Storage Does Not Work

Cloud Service provider success comes from leveraging the following key aspects of cloud technology:


• Multi-tenant

• Highly scalable, from very small to very large

• Pay-for-use paradigm

• Loosely coupled service

• Transparent data resilience and security

• Simple to implement, operate, and manage



This is what defines Cloud Services technologies, and it is where traditional storage systems come up short. Financially, traditional storage systems come up even shorter.



Common sense makes it clear that Cloud Service providers must deliver a level of service equivalent to or better than what IT organizations are accustomed to when delivering the services themselves. But Cloud Service providers need considerable economies of scale to provide those services at a cost point that lets them make a profit. Storage is a huge part of the Cloud Service provider's cost. Regrettably, traditional storage never reaches those necessary economies of scale, at least not to the point that makes Cloud Services cost-effective and compelling. A deeper examination of the requirements shows why.



Multi-Tenant

Multi-tenant means that in a shared Cloud Services environment, a specific client is the only one who can ever see or access his or her own data. Neither service provider employees nor other clients can ever deliberately or accidentally access that data. This requires both application and storage multi-tenancy. It is the storage multi-tenancy that is a bit more complicated.



Encryption methods require key management by the application or user. Integrated charge-back billing for the storage resources actually utilized is anathema to most traditional storage systems. Virtual namespaces within the storage system are a must, so that resellers can provide unique services without completely separate, costly storage infrastructures. Once again, this is rare in traditional storage systems.
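To make the charge-back requirement concrete, here is a minimal Python sketch of integrated, utilization-based billing; the rates, sampling scheme, and tenant names are hypothetical illustrations, not any vendor's actual metering API:

    # Illustrative charge-back sketch: bill each tenant only for the
    # gigabytes actually stored, sampled hourly over the month.
    # All names and rates are hypothetical.
    HOURS_IN_MONTH = 730          # average hours in a month
    RATE_PER_GB_MONTH = 0.10      # assumed $/GB-month

    # hourly capacity samples per tenant, in GB (toy data)
    samples = {
        "tenant-a": [500] * 400 + [800] * 330,   # grew mid-month
        "tenant-b": [120] * 730,                 # flat usage
    }

    def monthly_bill(hourly_gb):
        # average utilized GB over the month times the rate
        avg_gb = sum(hourly_gb) / HOURS_IN_MONTH
        return avg_gb * RATE_PER_GB_MONTH

    for tenant, usage in samples.items():
        print(f"{tenant}: ${monthly_bill(usage):,.2f}")

A storage system with this kind of metering built in removes the need for the homegrown billing applications discussed next.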




This lays the responsibility squarely on the service provider. They have to go to a great deal of trouble and time developing, documenting, supporting, fixing, and patching a customized application to overcome the shortfall. Quality assurance is a challenge as systems, or the software on those systems, change.



Legacy storage systems were simply not architected for multi-tenancy. With a lot of work, effort, and cost, multi-tenancy can be bolted on, but that work is not a one-time effort; it is ongoing. More importantly, the result tends not to adapt to the constantly changing customer base and requirements of the Cloud Service provider. In other words, this is not a viable long-term solution.



Highly Scalable From Very Small to Very Large



It is the ability to scale to extraordinary levels, from dozens of petabytes to exabytes, that makes the Cloud Services business model viable. It gives service providers the economies of scale that enable a compelling business case for their models. One of the key components of a Cloud service's scalability is the storage.



Ask any storage vendor if their storage is highly scalable and they will vigorously say yes. But what does “highly scalable” mean? How is it defined? More often than not it is a matter of degree, shaped directly by experience and organizational requirements. What is highly scalable for an enterprise is not even close for a Cloud Services operation, and is overkill for an SMB.



Pay For Use Paradigm



Cloud Services are marketed and purchased as an operating expense. Pricing is on a per-use and/or per-user basis. Cloud storage is priced the same way, on a per-GB-used-per-month basis. Customers only pay for the resources they utilize. This is a complete paradigm shift from the way legacy storage is marketed and purchased. The traditional storage model prices all of the storage required over a three- or four-year period, plus all of the software and even the maintenance, bundled into one up-front price. In addition, these storage systems calculate costs on raw storage, not usable storage. It is a misalignment of models, like trying to put a square peg in a round hole.
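To see the misalignment in numbers, consider a deliberately simplified Python comparison; every price and growth figure here is a hypothetical illustration, not market data:

    # Toy comparison: upfront raw-capacity purchase vs. pay-per-use.
    # All prices and growth rates are assumed for illustration.
    MONTHS = 36
    upfront_raw_gb = 1_000_000        # capacity sized for the year-3 peak
    price_per_raw_gb = 1.50           # bundled hardware/software/maintenance
    upfront_cost = upfront_raw_gb * price_per_raw_gb

    rate_per_used_gb = 0.08           # $/GB-month on *used* capacity
    used_gb = 100_000.0               # starting utilization
    pay_per_use_cost = 0.0
    for month in range(MONTHS):
        pay_per_use_cost += used_gb * rate_per_used_gb
        used_gb *= 1.05               # assumed 5% monthly growth

    print(f"Upfront raw purchase: ${upfront_cost:,.0f}")
    print(f"Pay-per-use, 3 years: ${pay_per_use_cost:,.0f}")

Under these assumptions the upfront buyer pays for a million raw gigabytes on day one, while the pay-per-use buyer's cost tracks utilization as it ramps; the risk of a wrong forecast sits in entirely different places.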



That traditional purchasing model places all of the risk on the Cloud Service provider. First, the service provider must accurately forecast how much storage it will need over a specific timeframe. This is exceedingly difficult for service providers with a “lumpy” customer and revenue stream. Then they have to hope they can sell enough services to cover the costs of their upfront storage investment. The storage vendors share none of the risk. This legacy model is incompatible with the go-to-market strategy of the Cloud Service provider. It is not necessarily a death knell for their business; however, it places the business in significant jeopardy.



Loosely Coupled Service



Loosely coupled services are designed to leverage commodity components. By definition, a loosely coupled service has no dependencies on specific hardware, enabling movement or changes transparently, easily, and without disruption. This concept is at the core of Cloud Services. It allows the service provider to use low-cost, highly reliable commodity components to provide a highly resilient, high-performance service. The ability to provide application continuity even as hardware is being upgraded, replaced, or modified is one of the key selling points for Cloud Services. No user downtime is the key.



Legacy storage is the antithesis of a loosely coupled service. Data is tightly coupled to systems, volumes, file systems, drives, namespaces, and so on, creating a highly deterministic, manually labor-intensive environment. Increased labor costs are not the recipe for a successful Cloud Services business. Even worse, the tight coupling is to high-cost proprietary hardware and software that locks in the customer unless they can tolerate the very high cost of moving away from these proprietary, rigid, inflexible architectures.



This is especially unfortunate during hardware and/or software refresh cycles. Replacing and refreshing legacy storage systems is terribly disruptive to applications. It is also a major, expensive time sink. There are 43 distinct, manually labor-intensive steps, with multiple tasks within each step, to replace and migrate a SAN storage system (39 for NAS). Many of these steps, such as server remediation, are intensely error-prone. Data can be, and often is, corrupted. And all of these steps require huge amounts of time, people, coordination, cooperation, and communication: time that is measured in months, even years, not hours or days. Add in the cost of professional services, plus the storage system overlap costs (both storage systems on the floor, powered, and paid for at the same time while only one is in use), and the fact that this refresh cycle takes place every 3 to 4 years, and you have a cost model that is untenable for Cloud Services.



Transparent Data Resilience



The trade press and the Internet carry numerous headline-grabbing stories about Cloud Service outages or breaches. These headlines are always followed by speculative blogs about whether Cloud Services will survive. This is a common occurrence and standard operating procedure for all paradigm-shifting technologies. It occurred with SAN storage, NAS, and the World Wide Web, and now it is happening with Cloud Services.



All IT systems fail. When they fail in a private data center with limited visibility to the outside world, there are no headlines. Private data center outages and security breaches occur far more frequently than is commonly known, and are publicized far less often than public Cloud Services failures. Market perception historically lags technical reality. Nevertheless, Cloud Service providers strive to mitigate or eliminate outages and security breaches because, in the market, perception is reality. A simple Internet search on the root causes of many of these very public failures shows human error as the primary cause. Examples of these errors include misconfigurations, inappropriate parameter settings, false assumptions, incorrect policies, and more. These are manually labor-intensive tasks: exactly the same kinds of tasks that are so prevalent with legacy storage systems.



It is incredibly difficult for Cloud Service providers to minimize outages and security breaches when utilizing legacy storage. Legacy storage increases the opportunities for these outages and security breaches because of the increased potential for human error.



Cloud Services require transparent data resiliency: no noticeable decline in performance or in access to customer data, even when data is lost, corrupted, or destroyed for whatever reason.



Simple to Implement, Operate, and Manage

Legacy storage systems have historically been anything but simple or intuitive. They are getting better: driven by customers that lack the expertise that was expected in the past, these systems have overlaid multiple layers of management and functionality to simplify implementation, operations, and management. However, these layers are like a Russian Matryoshka (nesting) doll when it comes to making changes. Being simple means keeping things simple, that is, static. Dynamic changes are challenging if possible at all, require extensive expertise and/or expensive professional services, and do not happen in real time.


Putting storage expertise into the storage system instead of the administrator reduces workloads, tasks, errors, time, expense, and headcount. Sensible as this is, it is also a hard-and-fast requirement for Cloud Services. Lean and flexible is the name of the game when it comes to Cloud Services.



Cloud Services are never static. Put bluntly, legacy storage systems cannot meet Cloud Services
requirements.



Real-World Workarounds and Why They’re Inadequate




It is human nature, upon discovering a difficult problem, to figure out a solution or workaround. And there are a number of common workarounds that Cloud Services storage admins attempt to implement to address the legacy storage problem.



For multi-tenant billing, they try third-party software or write their own scripts. But as pointed out previously, these efforts often founder in a sea of frustration for lack of ongoing QA and documentation, and because of their inherent inflexibility.



Another workaround, for multi-tenant security, is drive encryption. But drive encryption does not prevent different customers or services from having access to the drive; it only means data written to the drive is encrypted at rest. If the drive or drives are accessible through the storage system, the data can be accessed.



The usual workarounds for capacity scaling issues, and to lesser extents object scaling and performance scaling, are storage system sprawl or scale-out storage. As previously discussed, storage system sprawl is a non-starter for Cloud Service providers. Scale-out storage, especially scale-out NAS, has been gaining popularity; however, it has its own issues. First, each additional node in the cluster yields diminishing marginal returns: the capacity, performance, and object gains are smaller than those of the node before, and eventually additional nodes reduce scalability. Capacity typically tops out in the low double-digit petabytes, a couple of orders of magnitude too low. Objects are still an issue, typically topping out in the dozens of millions, not billions. And the total cost of ownership is still too high for ultimate Cloud Service provider success.
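The diminishing-returns pattern is easy to model numerically. Assuming, purely for illustration, that each added node contributes 10% less usable gain than the node before it (as cluster overhead grows), total capacity flattens no matter how many nodes are added:

    # Illustrative diminishing marginal returns in a scale-out cluster.
    # The 10% per-node decay is an assumed figure, not a measurement.
    node_raw_tb = 100.0
    decay = 0.90                 # each node yields 90% of the prior gain
    total_tb, gain = 0.0, node_raw_tb
    for n in range(1, 41):
        total_tb += gain
        gain *= decay
        if n in (10, 20, 40):
            print(f"{n:>2} nodes -> {total_tb:,.0f} TB usable")
    # The geometric series caps near node_raw_tb / (1 - decay) = 1,000 TB,
    # no matter how many more nodes are added.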



The workarounds for performance scaling also include using flash SSDs as cache or as a Tier 0 storage tier, and short-stroking HDDs as a cheaper alternative. These workarounds do scale performance within the limitations of the storage system, but at a very steep price, making the price/performance too high for Cloud Services providers to be cost-competitive.



A common pay-per-use workaround is to lease. Leasing turns all of that upfront CapEx into a monthly payment. It does not reduce risk, however, and in fact it ultimately costs more. Leasing has multiple components: the residual value of the storage system at the end of the lease period; the amount to be financed (the difference between sale price and residual); the interest on the residual value during the lease period; leasing fees; and the potential penalties for running long on the lease because of data migration timeframe issues. Leasing is not a good alternative to pay-per-use.
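A back-of-the-envelope Python sketch of how those components combine; every figure is hypothetical, and the interest approximation is deliberately crude:

    # Hypothetical lease cost sketch; all amounts are invented.
    sale_price   = 1_000_000   # storage system price
    residual     = 200_000     # assumed value at end of lease
    term_months  = 36
    annual_rate  = 0.08        # assumed implicit interest rate
    fees         = 15_000      # assumed leasing/origination fees
    overrun      = 6           # extra months from slow data migration
    penalty_mult = 1.5         # assumed penalty multiplier when running long

    financed = sale_price - residual
    # crude approximation: straight-line principal plus average interest
    monthly_principal = financed / term_months
    monthly_interest  = (financed / 2 + residual) * annual_rate / 12
    base_payment = monthly_principal + monthly_interest

    total = base_payment * term_months + fees
    total += base_payment * penalty_mult * overrun   # late-return penalty
    print(f"Base monthly payment: ${base_payment:,.0f}")
    print(f"Total lease cost:     ${total:,.0f} vs. ${sale_price:,} purchase")

Even this crude model shows the lease total drifting well past the purchase price once fees, interest, and migration overruns are counted.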



To limit the negative impact of legacy storage's tight coupling, many storage admins implement server virtualization with a storage virtualization layer. This allows a bit more flexibility by loosening the bonds between the application and the data. But it does so at a very steep cost, requiring duplicate identical storage systems, multiple copies of the data on expensive tier 1 storage, and redundant supporting infrastructure.



Working around the transparent data resilience issues is more complicated and more expensive. It comes down to copy-and-replicate, with multiple copies of the data on multiple legacy storage systems. But providing transparency across those multiple copies is more difficult. When data is lost, corrupted, misplaced, or deleted, the alternative copy or copies must be mounted on the application's file system (NAS) or the application must be pointed at the correct LUN (SAN or DAS). Both are manual, labor-intensive tasks and far from transparent: a costly exercise in time, people, infrastructure, and storage.



Managing around legacy storage system complexity usually means homegrown scripts that are rarely documented, QA'd, updated, or kept current with ongoing system changes. They tend to be inflexible, with a limited shelf life, and as personnel leave they have to be completely rewritten.



Organic Storage Meets or Exceeds Requirements



Organic Storage is living storage. It mimics the way life adapts to a constantly changing analog world. It is a loosely coupled grid of self-contained nodes, each contributing processing, I/O, and storage to a shared pool. Nodes are interconnected over a TCP/IP Ethernet network, allowing equal access to and allocation of resources on demand. By leveraging object storage (blocks or files stored with their metadata and index as a single object), Organic Storage is able to scale to unprecedented amounts of capacity (hundreds of petabytes to exabytes), objects (many billions), and performance (tens to hundreds of millions of IOPS). Organic Storage has numerous significant advantages for Cloud Service providers, including:



• Built-in multi-tenancy and security.

• Adapts in real time to changing demands, loads, and performance.

• Distributes data over many independent, off-the-shelf commodity components.

• Self-heals, at extraordinary new levels of resiliency, handling the failure of any component, or multiple components, with no material impact on performance, functionality, or availability, because the Organic Storage software is loosely coupled with the hardware.

• Scales capacity, objects, and/or performance in small or large increments, without limitations.

• Adds nodes online, always positively increasing capacity and performance, without ever stopping the system.

• Licenses on a pay-per-use paradigm, charged on utilized storage per month, sharing the risk with the Cloud Service provider.

• Refreshes storage in a manner analogous to the way organic systems replace their cells: continuously, progressively, and transparently, without application disruption.



This makes Organic Storage ideal for Cloud Services providers. It is the only type of storage that meets or
exceeds all Cloud Service operational and financial requirements.



Scality RING: The Leading Organic Storage System



Scality RING was architected from the ground up to be Organic Storage and to exceed Cloud Service requirements. It is analogous to an organic autonomic nervous system, actively and adaptively managing the what, where, when, why, and how of storage and retrieval without human (conscious) intervention. Scality RING leverages its unique, industry-hardened peer-to-peer technology to provide carrier-grade service availability and data reliability.



RING Organic Storage is made up of standard, off-the-shelf commodity server nodes. Each node on the RING is responsible for its own piece of the overall storage puzzle, and every node is functionally equivalent. A node can fail, be removed, or be upgraded, and new ones can be added, and the system will rebalance automatically without human intervention. This makes technology refresh a simple, online process with no application disruptions, eliminating data migration, long nights, and sleepless weekends.



There is no requirement for a master database or directory. Instead, the RING utilizes an incredibly efficient Distributed Hash Table (DHT) algorithm that consistently maps a particular key to a set of storage nodes. A DHT provides a lookup service similar to a hash table: key/value pairs are stored in the DHT, and any participating node can retrieve the value associated with a given key. Keys embed information about class of service. Each node is autonomous and responsible for checking consistency and rebuilding replicas automatically for its keys. Responsibility for maintaining the mapping from keys to values is distributed among the nodes in such a way that a change in the set of participants causes a minimal amount of disruption. This allows DHTs to scale to extremely large numbers of nodes and to handle continual node arrivals, departures, and failures.
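The property described above, where a membership change disturbs only a minimal set of keys, is the hallmark of consistent hashing. Here is a minimal, generic Python sketch of a consistent-hash ring; it illustrates the general technique, not Scality's actual implementation:

    import bisect
    import hashlib

    def h(value: str) -> int:
        # map a string onto the ring's key space
        return int(hashlib.sha256(value.encode()).hexdigest(), 16)

    class ConsistentHashRing:
        def __init__(self, nodes, vnodes=64):
            # virtual nodes smooth the load distribution around the ring
            self._ring = sorted(
                (h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes)
            )
            self._points = [p for p, _ in self._ring]

        def node_for(self, key: str) -> str:
            # the first ring point clockwise from the key's hash owns it
            i = bisect.bisect(self._points, h(key)) % len(self._ring)
            return self._ring[i][1]

    nodes = [f"node{i}" for i in range(6)]
    ring = ConsistentHashRing(nodes)
    keys = [f"object-{i}" for i in range(10_000)]
    before = {k: ring.node_for(k) for k in keys}

    # remove one node: only the keys it owned should move
    smaller = ConsistentHashRing([n for n in nodes if n != "node3"])
    moved = sum(1 for k in keys if smaller.node_for(k) != before[k])
    print(f"{moved} of {len(keys)} keys moved")   # roughly 1/6 of the keys

Removing one of six nodes remaps only about a sixth of the keys; a conventional modulo-based hash table would remap nearly all of them.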



The DHT's decentralization is the key to consistent performance that scales linearly. The nodes collectively form the system without any central coordination, bottlenecks, or single points of failure. This provides performance that rivals the fastest SANs (without any SAN complexity or cost), even though applications interconnect to the RING through a very simple, standardized REST interface. Loads are always evenly distributed and balanced between nodes. DHT decentralization also enables unrivaled capacity and object scalability: the RING scales from dozens to thousands of nodes, tens of petabytes to exabytes, and billions of objects.
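Because the application-facing interface is plain REST, storing and retrieving objects needs nothing more exotic than HTTP. The sketch below is hypothetical: the gateway URL, key layout, and absence of authentication are invented for illustration and are not Scality's documented API:

    # Hypothetical REST object access; the endpoint and URL scheme are
    # invented placeholders, not a documented Scality RING API.
    import requests

    BASE = "http://ring.example.com/objects"   # assumed gateway endpoint

    def put_object(key: str, data: bytes) -> None:
        r = requests.put(f"{BASE}/{key}", data=data, timeout=10)
        r.raise_for_status()

    def get_object(key: str) -> bytes:
        r = requests.get(f"{BASE}/{key}", timeout=10)
        r.raise_for_status()
        return r.content

    put_object("invoices/2011/09/0001.pdf", b"%PDF-1.4 ...")
    print(len(get_object("invoices/2011/09/0001.pdf")), "bytes read back")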



RING's DHT comes with unsurpassed built-in data resilience, similar to an organic immune system. Every node constantly monitors a limited number of its peers and automatically rebalances replicas and load, making the system completely self-healing without human intervention. Consistent hashing guarantees that only a small subset of keys is ever affected by a node failure or removal. The result is a very high level of fault tolerance, because the system stays reliable even with nodes continuously joining, leaving, or failing.



Advanced key-calculation algorithms allow modeling any kind of geographically aware replication deployment. Data can be spread across racks or data centers to follow business policies, even in the case of server failures.
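One generic way to picture geography-aware placement: walk the ring's node order for an object, but accept a node only if its rack or data center has not already received a replica, so copies always land in distinct failure domains. A sketch under that assumption, not the RING's actual key-calculation algorithm:

    # Generic data-center-aware replica placement sketch; the topology
    # and policy here are hypothetical, not Scality's algorithm.
    topology = {
        "n1": "dc-paris", "n2": "dc-paris",
        "n3": "dc-london", "n4": "dc-london",
        "n5": "dc-frankfurt",
    }

    def place_replicas(ring_order, replicas=3):
        # take the next node on the ring only if its data center
        # is not already hosting a replica of this object
        chosen, used_dcs = [], set()
        for node in ring_order:
            dc = topology[node]
            if dc not in used_dcs:
                chosen.append(node)
                used_dcs.add(dc)
            if len(chosen) == replicas:
                break
        return chosen

    # the ring order would come from the object's key hash; fixed here
    print(place_replicas(["n1", "n2", "n3", "n5", "n4"]))
    # -> ['n1', 'n3', 'n5']: one replica per data center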



In addition to meeting or exceeding Cloud Service provider operational requirements, Scality RING is software licensed on a pay-per-use model: service providers pay on a used-capacity basis, not raw. And because Scality RING utilizes standard, off-the-shelf commodity servers obtainable from the service provider's vendor of choice, it always provides the lowest possible TCO.



Conclusion


The requirements of Cloud Services are unique. Legacy, rigid, deterministic storage does not and cannot meet these requirements. What is needed is Organic Storage: storage that mimics the way living systems adapt to ever-changing conditions while matching Cloud Service provider business models.