January 14, 2014
My thoughts on network neutrality can be found here and some predictions contingent on its loss here, so obviously I am disheartened by this latest ruling. The top Google hit on this news is currently a good story at GigaOm, and further down Google’s hit list is a thoughtful article in Forbes, predicting this result, but coming to the wrong conclusion.
I am habitually skeptical of “slippery slope” arguments, where we are supposed to fear something that might happen, but hasn’t yet. So I sympathize with pro-ISP sentiments like that Forbes article in this regard. On the other hand, I view businesses as tending to be rational actors, maximizing their profits under the rules of the game. If the rules of the game incent the ISPs to move in a particular direction, they will tend to move in that direction. Because competition is so limited among broadband ISPs (for any home in America there are rarely more than two options, regardless of the actual number of ISPs in the nation), they are currently incented to ration their bandwidth rather than to invest in increasing it. This decision is a push in that same direction.
Arguably the Internet was born of Federal action that forced a corporation to do something it didn’t want to do: without the Carterfone decision, there would have been no modems in the US. Without modems, the Internet would never have gotten off the ground.
Arguments that government regulation could stifle the Internet miss the point that all business activity in the US is done under government rules of various kinds: without those rules competitive market capitalism could not work. So the debate is not over whether the government should ‘interfere,’ but over what kinds of interference the government should do, and with what motivations. I take the liberal view that a primary role of government is to protect citizens from exploitation by predators. I am an enthusiastic advocate of competitive-market capitalism too, where it can exist. The structure of capitalism pushes corporations to charge as much as possible and provide as little as possible for the money (‘maximize profit’). In a competitive market, the counter-force to this is competition: customers can get better, cheaper service elsewhere, or forgo service without harm. But because of the local lack of competition, broadband in the US is not a competitive market, so there is no counter-force. And since few would argue that you can live effectively in today’s US without access to the Internet, you can’t forgo service without harm.
The current rules of the broadband game in the US have moved us to a pathetically lagging position internationally so it seems reasonable to change them. Unfortunately this latest court decision changes them in the wrong direction, freeing ISPs to ration and charge more for connectivity rather than encouraging them to invest in bandwidth. If you agree that this is a bad thing, you can do some token venting here: http://act.freepress.net/sign/internet_FCC_court_decision2/
Here is a press release from an organization that few people could find fault with.
November 13, 2013
This piece from the Aberdeen Group shows accelerometers and gyroscopes becoming universal in smartphones by 2018.
Accelerometers were exotic in smartphones when the first iPhone came out – used mainly for sensing the orientation of the phone for displaying portrait or landscape mode. Then came the idea of using them for dead-reckoning-assist in location-sensing. iPhones have always had accelerometers; since all current smartphones are basically copies of the original iPhone, it is actually odd that some smartphones lack accelerometers.
Predictably, when supplied with a hardware feature, the app developer community came up with a ton of creative uses for the accelerometer: magic tricks, pedometers, air-mice, and even user authentication based on waving the phone around.
Not all sensor technologies are so fertile. For example the proximity sensor is still pretty much only used to dim the screen and disable the touch sensing when you hold the phone to your ear or put it in your pocket.
So what about the user-facing camera? Is it a one-trick pony like the proximity sensor, or a springboard to innovation like the accelerometer? Although videophoning has been a perennial bust, I would argue for the latter: the you-facing camera is pregnant with possibilities as a sensor.
Looking at the Aberdeen report, I was curious to see “gesture recognition” on a list of features that will appear on 60% of phones by 2018. The others on the list are hardware features, but once you have a camera, gesture recognition is just a matter of software. (The Kinect is a sidetrack to this, provoked by lack of compute power.)
In a phone, compute power means battery drain, so that’s a limitation on using the camera as a sensor. But each generation of chips becomes more power-efficient as well as more powerful, and as phone makers add more and more GPU cores, the developer community finds new uses for them that max them out.
Gesture recognition is already here on Samsung devices, and will soon be on every Android device. The industry is gearing up for innovation in phone-based computer vision with OpenVX from Khronos. When always-on computer vision becomes feasible from a power-drain point of view, gesture recognition and face tracking will look like baby steps. Smart developers will come up with killer applications that are currently unimaginable. For example, how about a library implementation of Paul Ekman’s emotion-recognition algorithms to let you know how you are really feeling right now? Or, in concert with Google Glass, one that ensures you are never again oblivious to your spouse’s emotional temperature?
Update November 19th: Here’s some news and a little bit of technology background on this topic…
Update November 22: It looks like a company is already working on the emotion-recognition technology.
October 24, 2013
Last month, WebRTC was included in Firefox for Android. It has been available for a while in the desktop versions of Chrome, Firefox, and Opera. Justin Uberti of Google claims that this adds up to a billion devices running WebRTC.
To get up to speed on the challenges and opportunities of WebRTC, and the future of real-time voice/video communications, Wirevolution’s Charlie Gold will be attending the WebRTC World Conference next month and writing about it here. The conference runs Nov 19-21 at the Convention Center in Santa Clara, CA.
At the conference we hope to make sense of the wide range of applications integrating WebRTC, and to relate them to integration opportunities for service providers. The applications range from standard video-enabled apps such as webcasting and security, to enterprise applications such as CRM and contact centers, to emerging opportunities such as virtual reality and gaming. Service providers can combine WebRTC with IMS and RCS, and can use it to manage network capabilities and end users’ quality of experience.
The conference is organized around four tracks: developer workshops, B2C, enterprise, and service providers.
- The developer workshop topics include concepts and structures of WebRTC, implementation and build options, standardization efforts, signaling options, applications to the internet of things (IoT), codec evolution, monitoring and alarms, firewall traversal with STUN/TURN/ICE, and large scale simultaneous delivery in applications such as webcasting, gaming and virtual reality (VR) and security.
- Business and consumer applications sessions cover successful deployment strategies of use cases like collaboration and conferencing, call centers and the Internet of Things (IoT). Other sessions on this track cover security, device requirements and regulatory issues.
- Service provider workshops include IMS value in a world of WebRTC, how to use WebRTC, deployment strategies, how to extend existing services and offer new services using WebRTC, using WebRTC to acquire new users, and understanding the network impact of WebRTC.
- The enterprise track has additional sessions on integrating WebRTC into your contact center and websites (public, supplier, internal). These sessions cover details like mapping out your integration strategy between WebRTC and SIP, using a media server vs. direct media interoperation; and how to deploy a WebRTC portal.
Keynotes will be from Ericsson, Alcatel-Lucent, Mozilla, Genband, Mavenir, Radisys, CafeX and presumably others.
To round it out, there will be a plethora of special workshops, realtime demos, panels and round tables.
With the momentum of WebRTC growing in leaps and bounds, we are looking forward to attending and sharing more on WebRTC next month.
October 11, 2013
Please download and use the RootMetrics app.
Their coverage maps are highly informative, and the more people that contribute, the more useful the results become.
September 30, 2013
In Wi-Fi, “multipath” used to mean the way signals traversing different spatial paths arrive at different times. Before MIMO this was considered an impairment to the signal, but with multiple antennas this ‘spatial diversity’ is exploited to deliver the huge speed increases of 802.11n over 802.11g.
This week has seen a big PR push on a different “multipath,” embodied in IETF RFC 6824, “TCP Extensions for Multipath Operation with Multiple Addresses,” (MPTCP). Of course you know that one of the robustness features of IP is that it delivers packets over different routes, but RFC 6824 takes it to another level, as described in this article in Ars Technica, and this one at MIT Technology Review.
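The core idea behind MPTCP is easy to picture in code. Here is a toy sketch of the concept only – not the actual protocol machinery of RFC 6824 – in which a sender stripes sequence-numbered segments across two paths, and the receiver reassembles the byte stream in order regardless of how the segments arrive:

```python
# Toy illustration of the multipath idea behind MPTCP (RFC 6824):
# stripe numbered segments across two "paths" and reassemble in order.
# This is a conceptual sketch, not the real protocol.

data = b"multipath delivery demo"

# Split the stream into segments tagged with data-level sequence numbers
segments = [(i, data[i:i + 4]) for i in range(0, len(data), 4)]

# Alternate segments across two subflows (two physical paths)
path_a = segments[0::2]
path_b = segments[1::2]

# The receiver may see segments from the two paths out of order...
received = path_b + path_a

# ...and uses the sequence numbers to reassemble the original stream
reassembled = b"".join(seg for _, seg in sorted(received))
assert reassembled == data
print(reassembled.decode())  # multipath delivery demo
```

The point of the sequence numbers is exactly what gives real MPTCP its robustness: either path can slow down or fail mid-stream, and whatever arrives on the other path still slots into the right place.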
June 25, 2013
The compelling advantage of VoIP is that it is far cheaper than circuit switched technology. But VoIP calls often sound horrible. It doesn’t have to be this way. Although VoIP is intrinsically prone to jitter, delay and packet loss, good system design can mitigate all these impairments. The simplest solution is over-provisioning bandwidth.
The lowest bandwidth leg of a VoIP call, where the danger of delayed or lost packets is the greatest, is usually the ‘last mile’ WAN connection from the ISP to the customer premises. This is also where bandwidth is most expensive.
On this last leg, you tend to get what you pay for. Cheap connections are unreliable. Since businesses live or die with their phone service, they are motivated to pay top dollar for a Service Level Agreement specifying “five nines” reliability. But there’s more than one way to skin a cat. Modern network architectures achieve high levels of reliability through redundant low-cost, less reliable systems. For example, to achieve 99.999% aggregate reliability, you could combine two independent systems (two ISPs) each with 99.7% reliability, three each with 97.8% reliability, or four each with 94% reliability. In other words, if your goal is 5 minutes or less of system down-time per year, with two ISPs you could tolerate 4 minutes of down-time per ISP per day. With three ISPs, you could tolerate 30 minutes of down-time per ISP per day.
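That arithmetic can be sketched in a few lines. This is a minimal illustration using the reliability figures quoted above, and it assumes the ISP links fail independently:

```python
MINUTES_PER_YEAR = 365 * 24 * 60

def aggregate_reliability(per_link: float, n: int) -> float:
    """Reliability of n independent links in parallel: the system
    is down only when every link is down simultaneously."""
    return 1 - (1 - per_link) ** n

def downtime_minutes_per_year(reliability: float) -> float:
    return (1 - reliability) * MINUTES_PER_YEAR

# Two 99.7% links comfortably exceed five nines (99.999%)...
assert aggregate_reliability(0.997, 2) > 0.99999

# ...even though each 99.7% link alone is down about 4.3 minutes per day
print(round(downtime_minutes_per_year(0.997) / 365, 1))  # 4.3
```

The independence assumption is the catch in practice: two DSL lines sharing the same pole, conduit, or upstream carrier will tend to fail together, so real deployments aim for path diversity (e.g. cable plus DSL plus wireless).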
Here’s a guest post from Dr. Cahit Jay Akin of Mushroom Networks, describing how to do this:
Clearing the Cloud for Reliable, Crystal-Clear VoIP Services
More companies are interested in cloud-based VoIP services, but concerns about performance hold them back. Now there are technologies that can help.
There’s no question that hosted, cloud-based Voice over IP (VoIP) and IP-PBX technologies are gaining traction, largely because they reduce costs for equipment, lines, manpower, and maintenance. But there are stumbling blocks – namely around reliability, quality and weak or non-existent failover capabilities – that are keeping businesses from fully committing.
Fortunately, there are new and emerging technologies that can optimize performance without the need for costly upgrades to premium Internet services. These technologies also protect VoIP services from jitter, latency caused by slow network links, and other common unpredictable behaviors of IP networks that impact VoIP performance. For example, Broadband Bonding, a technique that bonds various Internet lines into a single connection, boosts connectivity speeds and improves management of the latency within an IP tunnel. Across these multiple links, advanced algorithms can closely monitor WAN links and make intelligent decisions about each packet of traffic to ensure nothing is ever late or lost during communication.
VoIP Gains Market Share
The global VoIP services market, including residential and business VoIP services, totaled $63 billion in 2012, up 9% from 2011, according to market research firm Infonetics. Infonetics predicts that the combined business and residential VoIP services market will grow to $82.7 billion in 2017. While the residential segment makes up the majority of VoIP services revenue, the fastest-growing segment is hosted VoIP and Unified Communications (UC) services for businesses. Managed IP-PBX services, which focus on dedicated enterprise systems, remain the largest business VoIP services segment.
According to Harbor Ridge Capital LLC, which did an overview of trends and mergers & acquisitions activity of the VoIP market in early 2012, there are a number of reasons for VoIP’s growth. Among them: the reduction in capital investments and the flexibility hosted VoIP provides, enabling businesses to scale up or down their VoIP services as needed. Harbor Ridge also points out a number of challenges, among them the need to improve the quality of service and meet customer expectations for reliability and ease of use.
But VoIP Isn’t Always Reliable
No business can really afford a dropped call or a garbled message left in voicemail. But these mishaps do occur when using pure hosted VoIP services, largely because they are reliant on the performance of the IP tunnel through which the communications must travel. IP tunnels are inevitably congested and routing is unpredictable, two factors that contribute to jitter, delay and lost packets, which degrade the quality of the call. Of course, if an IP link goes down, the call is dropped.
Hosted, cloud-based VoIP services offer little in the way of traffic prioritization, so data and voice fight it out for Internet bandwidth. And there’s little monitoring available. IP-PBX servers placed in data centers or at the company’s headquarters can help by providing some protection over pure hosted VoIP services. They offer multiple WAN interfaces that let businesses add additional, albeit costly, links to serve as backups if one fails. Businesses can also take advantage of the various functions that an IP-PBX system offers, such as unlimited extensions and voice mail boxes, caller ID customizing, conferencing, interactive voice response and more. But IP-PBXes are still reliant on WAN performance and offer limited monitoring features. Thus, users and system administrators might not even know about an outage until they can’t make or receive calls. Some hosted VoIP services include a hosted IP-PBX, which typically includes back-up and storage and failover functions, as well as limited monitoring.
Boosting Performance through Bonding and Armor
Mushroom Networks has developed several technologies designed to improve the performance, reliability and intelligence of a range of Internet connection applications, including VoIP services. The San Diego, Calif., company’s WAN virtualization solution leverages virtual leased lines (VLLs) and its patented Broadband Bonding, a technique that melds various numbers of Internet lines into a single connection. WAN virtualization is a software-based technology that uncouples operating systems and applications from the physical hardware, so infrastructure can be consolidated and application and communications resources can be pooled within virtualized environments. WAN virtualization adds intelligence and management so network managers can dynamically build a simpler, higher-performing IP pipe out of real WAN resources, including existing private WANs and various Internet WAN links like DSL, cable, fiber, wireless and others. The solution is delivered via the Truffle appliance, a packet level load balancing router with WAN aggregation and Internet failover technology.
Using patented Broadband Bonding techniques, Truffle bonds various numbers of Internet lines into a single connection to ensure voice applications are clear, consistent and redundant. This provides faster connectivity via the sum of all the line speeds as well as intelligent management of the latency within the tunnel. Broadband Bonding is a cost effective solution for even global firms that have hundreds of branch offices scattered around the world because it can be used with existing infrastructures, enabling disparate offices to have the same level of connectivity as the headquarters without the outlay of too much capital. The end result is a faster connection with multiple built-in redundancies that can automatically shield applications such as VoIP from negative network events and outages. Broadband Bonding also combines the best attributes of the various connections, boosting speeds and reliability.
Mushroom Networks’ newest technology, Application Armor, shields VoIP services from the negative effects of IP jitter, latency, packet drops, link disconnects and other issues. This technology relies on a research field known as Network Calculus, which models and optimizes communication resources. Through decision algorithms, Application Armor monitors traffic and refines routing in the aggregated, bonded pipe by enforcing application-specific goals, whether it’s throughput or reduced latency.
VoIP at Broker Houlihan Lawrence – Big Savings and Performance
New York area broker Houlihan Lawrence – the nation’s 15th largest independent realtor – has cut its telecommunications bill by nearly 75 percent by deploying Mushroom Networks’ Truffle appliances in its branch offices. The agency began using Truffle shortly after Superstorm Sandy took out the company’s slow and costly MPLS communications network when it came ashore near Atlantic City, New Jersey last year. After the initial deployment to support mission-critical data applications including customer relationship management and email, Houlihan Lawrence deployed a state-of-the-art VoIP system and runs voice communications through Mushroom Networks’ solution. The ability to diversify connections across multiple providers and multiple paths assures automated failover in the event a connection goes down, and Application Armor protects each packet, whether it’s carrying voice or data, to ensure quality and performance are unfailing and crystal clear.
Hosted, cloud-based Voice over IP (VoIP) and IP-PBX technologies help companies like Houlihan Lawrence dramatically reduce costs for equipment, lines, manpower, and maintenance. But those savings are far from ideal if they come without reliability, quality and failover capabilities. New technologies, including Mushroom Networks’ Broadband Bonding and Application Armor, can optimize IP performance, boost connectivity speeds, improve monitoring and shield VoIP services from jitter, latency, packet loss, link loss and other unwanted behaviors that degrade performance.
Dr. Cahit Jay Akin is the co-founder and chief executive officer of Mushroom Networks, a privately held company based in San Diego, CA, providing broadband products and solutions for a range of Internet applications.
May 14, 2013
According to ComputerWeekly.com, “Nearly half of firms supporting BYOD report data breaches.” PWC’s 2013 Information Security Breaches Survey said “9% of large organisations had a security or data breach in the last year involving smartphones or tablets.” But as you know, correlation is not causation, and those quotes may imply a greater danger from BYOD than has yet been observed.
One of the most authoritative and exhaustive analyses of cyber security is Verizon’s annual “Data Breach Investigations Report.” The 2013 edition of the report analyzes over 47,000 ‘security incidents,’ including 621 ‘data breaches.’ It says:
The “Bring Your Own Device” (BYOD) trend is a current topic of debate and planning in many organizations. Unfortunately, we don’t have much hard evidence to offer from our breach data. We saw only one breach involving personally-owned devices in 2011 and a couple more in 2012.
So if your main concern is corporate data breach, the situation is not yet as dire on the mobile side as it is on the non-mobile side. But the Verizon report cautions:
Obviously mobile malware is a legitimate concern. Nevertheless, data breaches involving mobile devices in the breach event chain are still uncommon in the types of cases Verizon and our DBIR partners investigate. However, we do expect them to make more of an appearance in our data as mobile payment systems continue to become more common.
Two reports that focus on mobile malware are Trend Micro’s “Mobile Threat and Security Roundup,” and one I mentioned in a previous post, BlueCoat’s “2013 Mobile Malware Report.”
According to Trend:
In 2012, we detected 350,000 malicious and high-risk Android app samples, showing a significant increase from the 1,000 samples seen in 2011. It took less than three years for malicious and high-risk Android apps to reach this number—a feat that took Windows malware 14 years.
Just as Windows malware varied, so did Android malware—around 605 new malicious families were detected in 2012. Premium service abusers, which charge users for sending text messages to a premium-rate number, comprised the top mobile threat type, with transactions typically costing users US$9.99 a month. And victims of mobile threats didn’t just lose money, they also lost their privacy. The issue of data leakage continued to grow as more ad networks accessed and gathered personal information via aggressive adware.
Aggressive adware in mobile devices is now similar to the notorious spyware, adware, and click-fraud malware rampant in the early days of the PC malware era. Like PC malware, it generates profit by selling user data. PC malware took advantage of loopholes in legitimate ads and affiliate networks, while today’s aggressive adware can cause data leakages that aren’t always limited to malicious apps. Even popular and legitimate apps can disclose data.
The BlueCoat report concurs with this assessment:
Mobile threats are still largely mischiefware – they have not yet broken the device’s security model but are instead more focused on for-pay texting scams or stealing personal information.
So mobile malware is exploding, but so far targeting individuals in relatively trivial thefts. The Trend report observes that mobile threats are recapitulating the history of computer threats, but faster. Expect to see the mobile device threat level increase.
March 22, 2013
Blue Coat Systems has published an interesting report on the state of mobile malware. The good news is that in the words of the report “the devices’ security model” is not yet “broken.” This means that smartphones and tablets are still rarely hijacked by viruses in the way that computers commonly are.
Now for the bad news. On the Android side (though apparently not yet on the iOS side), virus-style hijackings have begun to appear:
Blue Coat WebPulse collaborative defense first detected an Android exploit in real time on February 5, 2009. Since then, Blue Coat Security Labs has observed a steady increase in Android malware. In the July-September 2012 quarter alone, Blue Coat Security Labs saw a 600 percent increase in Android malware over the same period last year.
But this increase is from a minuscule base, and this type of threat is still relatively minor on mobile devices. Instead the report says, “user behavior becomes the Achilles heel.” The main mobile threats are from what the report calls “mischiefware.”
Mischiefware works by enticing the user into doing something unintentional. The two main categories of Mischiefware are:
- Phishing, which tricks users into disclosing personal information that can be used for on-line theft.
- Scamming, which tricks users into paying far more than they expect for something – like for-pay text (SMS) messages or in-app purchases. Even legitimate service providers can be guilty of this type of ‘gotcha’ activity, with rapacious international data roaming charges, or punitive overage charges on monthly ‘plans.’
“User behavior becomes the Achilles Heel” is hardly a revelation. A more appropriate phrase would be “User behavior remains the Achilles Heel,” since in this respect the mobile world is no different from the traditional networking world.
March 8, 2013
Smartphones and tablets have plenty of computing power to host malware, and they are simultaneously connected to the Internet via a cellular connection and to the LAN via Wi-Fi. So everybody in your organization carries in their pocket a device capable of bypassing your firewall.
The good news is that smartphone OSes were designed recently enough that their creators were able to build security into the platforms, using techniques like ARM TrustZone and “chain of trust.” Technologies of this type are merely optional on PCs. Plus, the Android and iPhone app stores tightly control the applications they distribute, and most people don’t take the trouble to circumvent this protection. With these system-level and application-level protections, smartphones and tablets are intrinsically less vulnerable than PCs.
But there’s plenty of bad news, too. The chain of trust isn’t foolproof, and malicious code can get through the app store certification process.
On top of these traditional threats, a new one looms: HTML 5. Adobe Flash is so notoriously vulnerable that Steve Jobs refused to let it onto the iPhone. Adobe has now thrown in the towel, and committed to HTML 5 instead. HTML 5 is presumably safer than Flash, but it is untried, and it has powerful access to the platform more akin to a native app than to traditional HTML.
This means that we can expect a rising tide of smartphone-related security breaches.
January 16, 2013
Some ideas are so obvious once you hear them that you feel like you already had them yourself. One such is a new application for Wi-Fi from a company called Euclid Analytics.
Euclid’s idea is to provide Google Analytics-style information on foot traffic in retail stores. They implement it using the Wi-Fi on smart phones. This is technologically trivial: if you leave the Wi-Fi on your phone turned on, it will periodically transmit Wi-Fi packets, for example ‘probe requests.’ Every packet transmitted by a device contains a unique identifier for that device, the MAC address. So by gathering this information from a Wi-Fi access point, Euclid can tell how often and for how long each device is in the vicinity. Presumably enough people have Wi-Fi on their phones by now to gather statistically representative data for analytics purposes.
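As a back-of-the-envelope illustration, the bookkeeping on the access-point side might look something like this. It is a sketch under assumptions: the MAC addresses are made up, the salted-hash anonymization is my guess at responsible practice rather than a description of Euclid’s system, and actually capturing probe requests requires monitor-mode Wi-Fi hardware, which is omitted here:

```python
import hashlib
from collections import defaultdict

# Hypothetical per-deployment secret so raw MACs are never stored
SALT = b"per-deployment-secret"

def anonymize(mac: str) -> str:
    """Replace a raw MAC address with a salted hash, keeping it
    stable enough to recognize repeat visits."""
    return hashlib.sha256(SALT + mac.encode()).hexdigest()[:16]

# (device MAC, timestamp-in-seconds) pairs as a sniffer might emit them
probes = [
    ("aa:bb:cc:dd:ee:01", 100),
    ("aa:bb:cc:dd:ee:01", 700),
    ("aa:bb:cc:dd:ee:02", 250),
]

# Group sightings per anonymized device
sightings = defaultdict(list)
for mac, ts in probes:
    sightings[anonymize(mac)].append(ts)

# Dwell time per device: last sighting minus first sighting
dwell = {dev: max(t) - min(t) for dev, t in sightings.items()}
print(sorted(dwell.values()))  # [0, 600]
```

From records this simple you can already derive visit counts, dwell times, and repeat-visit rates per store – which is exactly why the aggregation and privacy questions below matter.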
The Euclid technology doesn’t require your opt-in, and it doesn’t need to be tied to Wi-Fi. The concept can trivially be extended to any phone (not just Wi-Fi-equipped ones) by using cellular packets rather than Wi-Fi, and, for people with no phone, face recognition with in-store cameras. For this kind of application even 90% accuracy on the face recognition would be useful.
One of only four choices on the navigation menu of Euclid’s website is Privacy. Privacy gets this prominent treatment because the privacy issues raised by this technology are immense.
Gathering this kind of information for one store – anonymous traffic by time, duration of stay, repeat visits and so on – doesn’t seem too intrusive on individuals, but Euclid will be tempted to aggregate it across all the stores in the world, and to correlate its data with other data that stores already gather, like point-of-sale records.
Many technology sophisticates I talk with tell me that it is naive to expect any privacy whatsoever in the Internet age, and I guess this is another example. Euclid will effectively know where you are most of the time, but it won’t know much more than your cellular provider, or any of the app vendors to whom you have given location permission on your phone.