Wirevolution

Enterprise Mobile Security

Archive for the ‘manageability’ Category

ITExpo: Anatomy of Enterprise Mobility: Revolutionizing the Mobile Workforce

Sunday, September 9th, 2012

If you are going to ITExpo West 2012 in Austin, make sure you attend my panel on this topic at 10:00 am on Friday, October 5th.

The panelists are Brigitte Anschuetz of IBM, Akhil Behl of Cisco Systems, John Gonsalves of Symphony Teleca Corporation, Sam Liu of Partnerpedia and Bobby Mohanty of Vertical.

The pitch for the panel is:

Enterprise mobility is one of the fastest growing areas of business, allowing companies to virtually connect with customers and employees from anyplace in the world. CIOs are facing more decisions than ever when it comes to managing their mobile workforce. Employees expect to be able to do their work on multiple platforms, from desktops and laptops to tablets and smartphones.

This session will dive into the various components of an enterprise mobility solution, provide best practices to ensure they are successful and explain how they integrate together to enable companies to grow their business. Topics will include: mobile enterprise application platforms, enterprise app stores, mobile device management, expense management, and analytics.

ITExpo: BYOD – The New Mobile Enterprise

Sunday, September 9th, 2012

If you are going to ITExpo West 2012 in Austin, make sure you attend my panel on this topic at 1:30 pm on Wednesday, October 3rd.

The panelists are Jeanette Lee of Ruckus Wireless, Ed Wright of ShoreTel and John Cash of RIM.

The pitch for the panel is:

BYOD (Bring Your Own Device) has been in full swing for a couple of years now, and there’s no going back. Enterprises have adopted a policy of allowing users to use their own devices to access corporate networks and resources. With it comes the cost savings of not having to purchase as many mobile devices, and user satisfaction increases when they are able to choose their preferred devices and providers (and avoid having to carry multiple devices). But the benefits don’t come without challenges — the user experience must be preserved, security policies must accommodate these multiple devices and operating systems, and IT has to contend with managing applications and access across different platforms. This session looks at what businesses can do to mitigate risks and ensure performance while still giving users the device freedom they demand.

Mobile Virtualization

Saturday, February 18th, 2012

According to Electronista, ARM’s next generation of chips for phones and tablets should start shipping in devices at the end of this year.

These chips are based on ARM’s big.LITTLE architecture. big.LITTLE chips aren’t just multi-core; they contain cores that are two different implementations of the same instruction set: a Cortex A7 and one or more Cortex A15s. The Cortex A7 executes an identical instruction set to the A15, but is slower and more power-efficient – ARM says it is the most power-efficient processor it has ever developed. The idea is that phones will get great battery life by mainly running on the slow, power-efficient Cortex A7, and great performance by using the A15 on the hopefully rare occasions when they need its muscle. ‘Rare’ in this context is relative. Power management on modern phones involves powering subsystems up and down in microseconds, so a ‘rarely’ used core could still be activated several times in a single second.
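
The switching logic behaves like a simple governor with hysteresis. Here is a toy sketch of the idea – the thresholds and names are invented for illustration and bear no relation to ARM’s actual scheduling code:

```python
# Toy sketch of a big.LITTLE-style "governor": illustrative only,
# not ARM's actual scheduler. It picks a core type from recent load.

LITTLE, BIG = "Cortex-A7", "Cortex-A15"

def pick_core(load_pct, big_up=85, big_down=60):
    """Choose the core type for the next scheduling interval.

    Hysteresis (big_up > big_down) avoids flapping between cores
    when load hovers near a single threshold.
    """
    pick_core.current = getattr(pick_core, "current", LITTLE)
    if pick_core.current == LITTLE and load_pct >= big_up:
        pick_core.current = BIG      # burst of work: wake the fast core
    elif pick_core.current == BIG and load_pct <= big_down:
        pick_core.current = LITTLE   # back to the power-efficient core
    return pick_core.current

# A bursty workload mostly stays on the A7, migrating to the A15
# only for the burst and back again once load subsides.
trace = [pick_core(load) for load in (10, 20, 95, 90, 70, 50, 15)]
```

The point of the hysteresis band is exactly the ‘rarely is relative’ observation above: even a conservative policy like this one can migrate between cores many times per second under a spiky workload.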

The Cortex A15 and the Cortex A7 are innovative in another way, too: they are the first cores to implement the virtualization extensions to the ARMv7-A architecture – ARM’s first hardware support for virtualization.

Even without hardware support, virtualization on handsets has been around for a while; phone OEMs use it to make cheaper smartphones by running Android on the same CPU that runs the cellular baseband stack. ARM says:

Virtualization in the mobile and embedded space can enable hardware to run with less memory and fewer chips, reducing BOM costs and further increasing energy efficiency.

This application, running Android on the same core as the baseband, does not seem to have taken the market by storm – I presume because of performance. Even the advent of hardware support for virtualization may not rescue this application, since mobile chip manufacturers now scale performance by adding cores, and Moore’s law is rendering multicore chips cheap enough to put into mass-market smartphones.

So what about other applications? The ARM piece quoted above goes on to say:

Virtualization also helps to address safety and security challenges, and reduces software development and porting costs by man years.

In 2010 Red Bend Software, a company that specializes in manageability software for mobile phones, bought VirtualLogix, one of the three leading providers of virtualization software for phones (the other two being Trango, bought by VMware in 2008, and OK Labs).

In view of Red Bend’s market, it looks as if they acquired VirtualLogix primarily to enable enterprise IT departments to securely manage their employees’ phones. BYOD (Bring Your Own Device) is a nightmare for IT departments; historically they have kept chaos at bay by supporting only a limited number of devices and software setups. But in the era of BYOD employees demand to use a vast and ever-changing variety of devices. Virtualization enables Red Bend to add a standard corporate software load to any phone.

This way, a single phone has a split personality, and the hardware virtualization support keeps the two personalities securely insulated from each other. On the consumer side, the user downloads apps, browses websites and generally engages in risky behavior. But none of this impacts the enterprise side of the phone, which remains secure.

Third Generation WLAN Architectures

Thursday, October 21st, 2010

Aerohive claims to offer the first example of a third-generation Wireless LAN architecture.

  • The first generation was the autonomous access point.
  • The second generation was the wireless switch, or controller-based WLAN architecture.
  • The third generation is a controller-less architecture.

The move from the first generation to the second was driven by enterprise networking needs. Enterprises need greater control and manageability than smaller deployments. First generation autonomous access points didn’t have the processing power to handle the demands of greater network control, so a separate category of device was a natural solution: in the second generation architecture, “thin” access points did all the real-time work, and delegated the less time-sensitive processing to powerful central controllers.

Now the technology transition to 802.11n enables higher capacity wireless networks with better coverage. This allows enterprises to expand the role of wireless in their networks, from convenience to an alternative access layer. This in turn further increases the capacity, performance and reliability demands on the WLAN.

Aerohive believes this generational change in technology and market requires a corresponding generational change in system architecture. A fundamental technology driver for 802.11n, the ever-increasing processing bang-for-the-buck yielded by Moore’s law, also yields sufficient low-cost processing power to move the control functions from central controllers back to the access points. Aerohive aspires to lead the enterprise Wi-Fi market into this new architecture generation.

Superficially, getting rid of the controller looks like a return to the first generation architecture. But an architecture with all the benefits of a controller-based WLAN, only without a controller, requires a sophisticated suite of protocols by which the smart access points can coordinate with each other. Aerohive claims to have developed such a protocol suite.

The original controller-based architectures used the controller for all network traffic: the management plane, the control plane and the data plane. The bulk of network traffic is on the data plane, so bottlenecks there do more damage than on the other planes. So modern controller-based architectures have “hybrid” access points that handle the data plane, leaving only the control and management planes to the controller device (Aerohive’s architect, Devin Akin, says: “distributed data forwarding at Layer-2 isn’t news, as every other vendor can do this.”) Aerohive’s third generation architecture takes the next step and distributes control plane handling as well, leaving only the management function centralized, and that’s just software on a generic server.

Aerohive contends that controller-based architectures are expensive, poorly scalable, unreliable, hard to deploy and not needed. A controller-based architecture is more expensive than a controller-less one, because controllers aren’t free (Aerohive charges the same for its APs as other vendors do for their thin ones: under $700 for a 2×2 MIMO dual-band 802.11n device). It is not scalable because the controller constitutes a bottleneck. It is not reliable because a controller is a single point of failure, and it is not needed because processing power is now so cheap that all the functions of the controller can be put into each AP, and given the right system design, the APs can coordinate with each other without the need for centralized control.

Distributing control in this way is considerably more difficult than distributing data forwarding. Control plane functions include all the security features of the WLAN, like authentication and admission, multiple VLANs and intrusion detection (WIPS). Greg Taylor, wireless LAN services practice lead for the Professional Services Organization of BT in North America, says “The number one benefit [of a controller-based architecture] is security,” so a controller-less solution has to reassure customers that their vulnerability will not be increased. According to Dr. Amit Sinha, Chief Technology Officer at Motorola Enterprise Networking and Communications, other functions handled by controllers include “firewall, QoS, L2/L3 roaming, WIPS, AAA, site survivability, DHCP, dynamic RF management, firmware and configuration management, load balancing, statistics aggregation, etc.”

You can download a comprehensive white paper describing Aerohive’s architecture here.

Motorola recently validated Aerohive’s vision, announcing a similar architecture, described here.

Here’s another perspective on this topic.

A not so perfect Storm

Wednesday, December 10th, 2008

The Verizon Storm may be heading for failure in more than one way. A raft of reviewers, led by David Pogue of the New York Times, is trashing its usability. This means that even with the marketing might of Verizon behind it, it may not fulfill its goal of being a bulwark against the iPhone in the enterprise.

But the Storm was an experiment by Verizon in another way. The other three major American mobile network operators have capitulated to Wi-Fi in smartphones. Against the new conventional wisdom, Verizon decided to launch a new flagship smartphone without Wi-Fi. The Storm looks like a trial balloon to see whether Wi-Fi is optional in modern smartphones. If the Storm is a success, it will demonstrate that it is possible to have credible business smartphones without Wi-Fi. But if it turns out to be a flop because of other factors, it will not be a proof point for Wi-Fi either way.

But Wi-Fi is a closed issue by now for all the network operators, perhaps even including Verizon. Phones have lead times on the order of a year, so controversies that were active back then may now be resolved. Verizon hedged its bets by launching three other smartphones around the same time as the Storm, all with Wi-Fi (HTC Touch Pro, Samsung Omnia, Samsung Saga).

Before its launch, AT&T hoped that the iPhone would stimulate use of the cellular data network. It succeeded so far beyond AT&T’s hopes that it revealed a potential problem with the concept of 3G (and 4G) data: the network slows to a crawl if enough subscribers use data intensively in small areas like airports and conferences. Mobile network operators used to fear that if phones had Wi-Fi, subscribers would use it instead of the cellular data network, causing a revenue leak. AT&T solved that problem with the iPhone by making a subscription to the data service obligatory, and T-Mobile followed suit with the Google phone. So no revenue leak. With the data subscription in hand, Wi-Fi is a good thing for the network operators because it offloads the 3G network: in residences and businesses, all the data that goes through Wi-Fi is a reduction in the potential load on the network – in other words, a savings in infrastructure investment, which translates to profit. This may be some of the thinking behind AT&T’s recent acquisition of Wayport, whose bandwidth offloads the AT&T network relatively cheaply. AT&T’s enthusiasm for Wi-Fi is such that it is selling some new Wi-Fi phones without requiring a data subscription.

The enterprise market is one that mobile network operators have long neglected. It is small relative to the consumer market, and harder to fit into a one-size-fits-all model. Even so, in these times of scraping for revenue in every corner, and with the steady rise of the Blackberry, the network operators are taking a serious look at the enterprise market.

The device manufacturers are way ahead of the network operators on this issue: the iPhone now comes with a lot of enterprise readiness Kool-Aid; Windows Mobile makes manageability representations, as does Nokia with its Eseries handsets. RIM, the current king of the enterprise smartphone vendors, also pitches its IT-friendliness.

Wi-Fi in smartphones has benefits and drawbacks for enterprises. One benefit is that you have another smart device on the corporate LAN to enhance productivity. A drawback is that you have another smart device on the corporate LAN ripe for viruses and other security breaches. But that issue is mitigated to some extent if smartphones don’t have Wi-Fi. So it’s arguable that the Storm may be more enterprise-friendly as a result of its lack of Wi-Fi. Again, if the Storm becomes a hit in enterprises that argument will turn out to hold water. If the Storm is a flop for other reasons, we still won’t know, and it will have failed as a trial balloon for Wi-Fi-less enterprise smartphones.

Ask and ye shall receive

Friday, March 7th, 2008

Ken Dulaney, Gartner VP distinguished analyst and general mobile device guru, told the crowd at the Gartner Mobile & Wireless Summit today that he still can’t recommend businesses adopt the iPhone — even with an SDK. Dulaney said that he recently wrote Apple a letter in which he outlined several things Apple would need to do with the iPhone before Gartner could change its mind about it. The directives included:
– Permit the device to be wiped remotely if lost or stolen
– Require strong passwords
– Stop using iTunes for syncing with a computer
– Implement full over-the-air sync for calendar and PIM

Jason Hiner, TechRepublic March 5th, 2008

On the same day Dulaney said this in Chicago, Phil Schiller of Apple was holding a news conference in Santa Clara granting some of these wishes, and many more:

  • Microsoft Exchange support with built-in ActiveSync.
  • Push email
  • Push calendar
  • Push contacts
  • Global address lists
  • Additional VPN types, including Cisco IPsec VPN
  • Two-factor authentication, certificates and identities
  • Enterprise-class Wi-Fi, with WPA2/802.1x
  • Tools to enforce security policies
  • Tools to help configure thousands of iPhones and set them up automatically
  • Remote device wiping

At the news conference Apple wheeled out several corporate endorsers: Genentech, Stanford University, Nike and Disney.

At first blush, the new enterprise-oriented capabilities of the iPhone appear to be an IT manager’s dream come true (though it will be a while before the dream is a reality). Even this contrarian post concedes that it will make the iPhone more competitive with the Blackberry, while faulting Apple for not having a comprehensive enterprise strategy.

Apple is clearly serious about the enterprise smartphone market, and this strategy is sound. The business market supports price points that easily accommodate the iPhone, and this strategy spills over to the business PC market in two ways: today by acting as a door-opener for Mac sales, tomorrow by evolving the iPhone into a PC replacement for many users.

iPhone 3G, SDK, enterprise orientation

Sunday, March 2nd, 2008

UBS thinks that the 3G iPhone will be released mid-year. iLounge reports that the much-anticipated iPhone SDK will be delivered in June, at Apple’s Worldwide Developer Conference. A beta version will be released at an announcement event on March 6th.

There are several reports that Apple intends to target business users with the iPhone, competing with Blackberries, Nokia’s Eseries and Windows Mobile devices. Since the SDK reportedly will expose interfaces to the phone and Wi-Fi, developers of Wi-Fi soft-phones and enterprise Fixed-Mobile Convergence systems will presumably add iPhone support to their existing Symbian and Windows-supporting products. It remains to be seen how easy it will be for developers to actually get their software “officially” onto the iPhone. Apple can choose its degree of openness from a variety of options discussed here.

For Apple to aim at the business market makes a lot of sense. With the successful transition to Intel processors Macs already run Windows natively, and iPhones are supposedly making inroads among executives. According to ChangeWave, summarized here, the iPhone has a 5% share of corporate smartphones already, with astronomical ratings for satisfaction.

To make enterprise IT departments happy, though, Apple will have to make the iPhone more manageable; either by building in OMA DM like Nokia with the Eseries, or by letting third parties develop enterprise manageability clients using the iPhone SDK.

Competitors aren’t sitting still for this. The October 2007 announcement of “Microsoft System Center Mobile Device Manager” was a step forward for Windows Mobile in the enterprise. Microsoft is also leaking stories about how Windows Mobile 7, due in 2009, will be more of a pleasure to use than the iPhone. It is conceivable, I suppose, but Microsoft’s track record on usability is pretty consistent. The fundamental part that they invariably seem to get wrong is instant response to user input.

Reliable VoIP

Friday, September 14th, 2007

QoS metrics are important, and several companies have products that measure packet loss, jitter, latency and so on. But you can have perfect QoS, and your VoIP system can still be defective for all sorts of reasons.
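
For illustration, here is how these metrics are typically derived from per-packet arrival data. The jitter estimator follows the standard RFC 3550 (RTP) formulation; the packet records and numbers are made up:

```python
# Sketch of standard VoIP QoS metrics computed from RTP-style records.
# Jitter uses the RFC 3550 running estimator: J += (|D| - J) / 16.

def qos_metrics(packets):
    """packets: list of (seq, send_time, recv_time) tuples, in arrival order."""
    seqs = [seq for seq, _, _ in packets]
    expected = max(seqs) - min(seqs) + 1
    loss_pct = 100.0 * (expected - len(packets)) / expected

    jitter = 0.0
    prev_transit = None
    for _, sent, received in packets:
        transit = received - sent          # one-way transit (clock offset cancels in D)
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0  # RFC 3550 smoothing
        prev_transit = transit

    latency = sum(r - s for _, s, r in packets) / len(packets)
    return loss_pct, jitter, latency

# Five packets sent 20 ms apart; seq 3 was lost. Times in milliseconds.
pkts = [(1, 0, 50), (2, 20, 71), (4, 60, 110), (5, 80, 132)]
loss, jit, lat = qos_metrics(pkts)   # loss = 20.0% (1 of 5 packets)
```

All three numbers can be perfect, though, and – as the rest of this post argues – the VoIP system can still be broken in ways no packet-level metric will reveal.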

I spoke with Gurmeet Lamba, VP of Engineering at Clarus Systems, at the Internet Telephony Expo this week. He said that even if a VoIP system is perfectly configured on installation, it can decay over time to the point of unusability. Routers go down and are brought up again with minor misconfigurations; moves, adds and changes accumulate bad settings and policy violations.

VoIP systems are rarely configured perfectly even on installation. For example, IP phones have built-in switches so you can plug your PC into your desk phone. Those ports are unlocked by default. But some phones are installed in public areas like lobbies. It’s easy for installers to forget to lock those ports, so anybody sitting in the lobby can plug their laptop into the LAN. There are numerous common errors of this kind. Clarus has an interesting product that actively and passively tests for them; it monitors policy compliance and triggers alarms on policy violations.
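
A compliance check of this kind is simple in principle. Here is a toy sketch of the unlocked-port rule – the inventory format, names and rule are invented for illustration and are not Clarus’s actual product:

```python
# Toy policy check: flag desk phones whose PC port is unlocked in a
# public location. The inventory format is invented for illustration.

PUBLIC_LOCATIONS = {"lobby", "conference", "reception"}

def audit_pc_ports(phones):
    """Return a list of policy-violation messages for an inventory of phones."""
    violations = []
    for phone in phones:
        if phone["location"] in PUBLIC_LOCATIONS and not phone["pc_port_locked"]:
            violations.append(
                f"{phone['id']}: unlocked PC port in public area '{phone['location']}'"
            )
    return violations

inventory = [
    {"id": "SEP-001", "location": "lobby",  "pc_port_locked": False},  # violation
    {"id": "SEP-002", "location": "office", "pc_port_locked": False},  # allowed
    {"id": "SEP-003", "location": "lobby",  "pc_port_locked": True},   # compliant
]
alarms = audit_pc_ports(inventory)   # one alarm, for SEP-001
```

The value of a product in this space is less the rule evaluation, which is trivial, than maintaining an accurate inventory and a library of such rules across thousands of endpoints.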

Clarus uses CTI to do active testing of your VoIP system, looking for badly configured devices and network bottlenecks. Currently it works only on Cisco voice networks, but Clarus plans to support other manufacturers.

Clarus started out focusing on automated testing of latency, jitter and packet loss for IP phone systems. It went on to add help desk support with remote control of handsets, and the ability to roll back phone settings to known good configurations.

The next step was to add “Business Information,” certifying deployment configurations, and helping to manage ongoing operations with change management and vulnerability reports. Clarus’ most recent announcement added passive monitoring based on a policy-based rules engine.

Clarus claims to have tested over 350,000 endpoints to date. It has partners that offer network monitoring services.

The hidden telecom money-drain

Tuesday, March 6th, 2007

Corporations have trimmed their telecommunications expenses down to the bone, but there is still a huge telephone-related money leak at most companies. This is the cell phone bill that so many employees simply expense, so it isn’t a controlled part of the IT department budget.

Businesses are finding it increasingly urgent to manage their cell phones the way they manage their network and computer equipment. One company seeking to help with this is IntegratedMobile.

But bringing cell phones under the corporate manageability umbrella is just the first step in integrating them into the IT strategy. The next step is to treat them the same way as regular phones for their voice capabilities, and as laptops for their data capabilities.

  • Treating cell phones the same way we treat laptops would call for a standard managed corporate software build with a common image worldwide.
  • Treating cell phones the same way we treat corporate desk phones would call for them to have all the standard PBX phone features, and to be administered on the PBX the same way desk phones are.

The trouble with this vision is that there is no worldwide mobile network operator (MNO), and in the US phones are normally bought through the MNO. So no phone exists that can be bought centrally, loaded with a common manageable image, and deployed worldwide.