FCC Title II Ruling

I got an email from the Heartland Institute today, purporting to give an expert opinion about today’s Net Neutrality ruling. The money quote reads: “The Internet is not broken, it is a vibrant, continually growing market that has thrived due to the lack of regulations that Title II will now infest upon it.”

This is wrong on both Internet history and the current state of broadband in the US.

It was the common carriage regulatory requirement on voice lines that first enabled the Internet to explode into the consumer world, by obliging the phone companies to allow consumers to hook up modems to their voice lines. It is the current unregulated environment in the US that has caused our Internet to become, if not broken, at least considerably worse than it is in many other countries:

America currently ranks thirty-first in the world in average download speeds and forty-second in average upload speeds, according to a recent study by Ookla Speedtest. Consumers pay much more for Internet access in the U.S. than in many other countries.

Net Neutrality, Congestion, DRM

Videos burn up a lot more bandwidth than written words, per hour of entertainment. The Encyclopedia Britannica is 0.3 GB in size, uncompressed. The movie Despicable Me is 1.2 GB, compressed. Consequently we should not be surprised that most Internet traffic is video traffic:
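Some rough arithmetic makes the point. The file sizes are the ones just quoted; the 95-minute running time and the reading-speed figures are illustrative assumptions, a sketch rather than a measurement:

```python
movie_gb = 1.2            # Despicable Me, compressed
movie_hours = 95 / 60     # assumed running time

reading_wpm = 250         # assumed reading speed
bytes_per_word = 6        # rough average, including spaces
text_gb_per_hour = reading_wpm * 60 * bytes_per_word / 1e9   # ~0.00009 GB

video_gb_per_hour = movie_gb / movie_hours                   # ~0.76 GB
print(f"video is ~{video_gb_per_hour / text_gb_per_hour:,.0f}x "
      f"the bandwidth of reading, per hour of entertainment")
```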

The main source of the video traffic is Netflix, followed by YouTube:

Internet Service Providers would like to double-dip, charging you for your Internet connection, and also charging Netflix (which already pays a different ISP for its connection) for delivering its content to you. And they do.

To motivate content providers like Netflix to pay extra, ISPs that don’t care about their subscribers could hold them to ransom, using network congestion to make Netflix movies look choppy, blocky and prone to freezing until Netflix coughs up. And they do:


This example illustrates the motivation structure of the industry. Bandwidth demand is continuously growing. An ISP has two basic strategies for coping with that growth: increase capacity, or ration the bandwidth it already has. The Internet core is sufficiently competitive that its capacity grows by leaps and bounds. The last mile to the consumer is far less competitive, so the ISP has little motivation to upgrade its equipment. It can simply prioritize packets from Netflix and whoever else is prepared to pay the toll, and let the rest drop undelivered.
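As a toy illustration of the rationing strategy (a sketch, not any real ISP’s code), here is a congested link that forwards packets from paying flows first and drops whatever exceeds its capacity:

```python
import random

LINK_CAPACITY = 100            # packets the link can forward per tick
PAID_FLOWS = {"netflix"}       # flows paying the toll (hypothetical)

def schedule(queue):
    """Forward up to LINK_CAPACITY packets, paid flows first; drop the rest."""
    paid = [p for p in queue if p["flow"] in PAID_FLOWS]
    unpaid = [p for p in queue if p["flow"] not in PAID_FLOWS]
    ranked = paid + unpaid
    return ranked[:LINK_CAPACITY], ranked[LINK_CAPACITY:]

# 150 packets arrive in one tick on a 100-packet link: congestion.
queue = [{"flow": random.choice(["netflix", "other"])} for _ in range(150)]
forwarded, dropped = schedule(queue)
print(f"{len(forwarded)} forwarded, {len(dropped)} dropped;",
      "all drops from:", {p["flow"] for p in dropped})
```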

One might expect customers to complain if this were happening in a widespread way. And they do:

Free-market competition might be a better answer to this particular issue than regulation, except that this problem isn’t really amenable to competition: you need a physical connection (ideally fiber) for the next generation of awesome immersive Internet. Running a network pipe to the home is expensive, like running a gas pipe, a water pipe, a sewer, an electricity supply cable, or a road; so, like all of those, it is a natural monopoly. Natural monopolies work best when strongly regulated, and the proposed FCC Title II action on Net Neutrality is a good start.

Digital Rights Management

Unrelated but easily confused with Net Neutrality is the issue of copyright protection. The Stop Online Piracy Act, or SOPA, was defeated by popular outcry for being too expansive. The remedies proposed by SOPA were to take down websites hosting illegal content, and to oblige ISPs to block illegal content from their networks.

You might have noticed in the first graphic above that about 3% of what consumers consume (“Downstream”) online is “filesharing,” a.k.a. music and video piracy. It is pretty much incontrovertible that the Internet has devastated the music business. One might debate whether it was piracy or iTunes that did it in, but either way the fact of Internet piracy gave Steve Jobs a lot of leverage in his negotiations with the music industry. What’s to prevent a similar disembowelment of the movie industry, when a consumer in Dallas can watch a movie like “Annie” for free in his home theater before it has even been released?

The studio that distributes the movie would like to make sure you pay for seeing it, and don’t get a pirated copy. I think so too. This is a perfectly reasonable position to take, and if the studio were also your ISP, it might feel justified in blocking suspicious content. In the US it is not unusual for the studio to be your ISP (for example, if your ISP is Comcast and the movie is Despicable Me). In a non-net-neutral world an ISP could block content unilaterally. But Net Neutrality says that an ISP can’t discriminate between packets based on content or origin. So in a net-neutral world, an ISP would be obliged to deliver pirated content, even when one of its own corporate divisions was getting ripped off.

This dilemma is analogous to free speech. The civilized world recognizes that in order to be free ourselves, we have to put up with some repulsive speech from other people. The alternative is censorship: empowering some bureaucrat to silence people who say unacceptable things. Enlightened states don’t like to go there, because they don’t trust anybody to define what’s acceptable. Similarly, it would be tough to empower ISPs to suppress content in a non-arbitrary but still timely way, especially when the content is encrypted and the source is obfuscated. Opposing Net Neutrality on the grounds of copyright protection is using the wrong tool for the job. It would be much better to find an alternative solution to piracy.

Actually, maybe we have. The retail world has “shrinkage” of about 1.5%. The credit card industry remains massively profitable even while factoring in a provision for fraud at about 3% of customers compromised.

Total Existing Card Fraud Losses and Incidence Rate by Year. Source: Lexis/Nexis.

“Filesharing” at 3% of download volume seems manageable in that context, especially since it has trended down from 10% in 2011.

ALA Troubled by Court’s Net Neutrality Decision

My thoughts on network neutrality can be found here and some predictions contingent on its loss here, so obviously I am disheartened by this latest ruling. The top Google hit on this news is currently a good story at GigaOm, and further down Google’s hit list is a thoughtful article in Forbes, predicting this result, but coming to the wrong conclusion.

I am habitually skeptical of “slippery slope” arguments, where we are supposed to fear something that might happen, but hasn’t yet. So I sympathize with pro-ISP sentiments like those in that Forbes article. On the other hand, I view businesses as tending to be rational actors, maximizing their profits under the rules of the game. If the rules of the game incent the ISPs to move in a particular direction, they will tend to move in that direction. Because competition is so limited among broadband ISPs (for any home in America there are rarely more than two options, regardless of the actual number of ISPs in the nation), they are currently incented to ration their bandwidth rather than to invest in increasing it. This decision is a push in that same direction.

Arguably the Internet was born of Federal action that forced a corporation to do something it didn’t want to do: without the Carterfone decision, there would have been no modems in the US. Without modems, the Internet would never have gotten off the ground.

Arguments that government regulation could stifle the Internet miss the point that all business activity in the US is done under government rules of various kinds: without those rules competitive market capitalism could not work. So the debate is not over whether the government should ‘interfere,’ but over what kinds of interference the government should do, and with what motivations. I take the liberal view that a primary role of government is to protect citizens from exploitation by predators. I am an enthusiastic advocate of competitive-market capitalism too, where it can exist. The structure of capitalism pushes corporations to charge as much as possible and provide as little as possible for the money (‘maximize profit’). In a competitive market, the counter-force to this is competition: customers can get better, cheaper service elsewhere, or forgo service without harm. But because of the local lack of competition, broadband in the US is not a competitive market, so there is no counter-force. And since few would argue that you can live effectively in today’s US without access to the Internet, you can’t forgo service without harm.

The current rules of the broadband game in the US have moved us to a pathetically lagging position internationally, so it seems reasonable to change them. Unfortunately this latest court decision changes them in the wrong direction, freeing ISPs to ration and charge more for connectivity rather than encouraging them to invest in bandwidth. If you agree that this is a bad thing, you can do some token venting here: http://act.freepress.net/sign/internet_FCC_court_decision2/

Here is a press release from an organization that few people could find fault with.

Gesture recognition in smartphones

This piece from the Aberdeen Group shows accelerometers and gyroscopes becoming universal in smartphones by 2018.

Accelerometers were exotic in smartphones when the first iPhone came out; it used one mainly to sense the orientation of the phone, to switch the display between portrait and landscape mode. Then came the idea of using them for dead-reckoning assist in location sensing. Every iPhone has had an accelerometer, and since all current smartphones are basically copies of the original iPhone, it is actually odd that some smartphones still lack them.

Predictably, when supplied with a hardware feature, the app developer community came up with a ton of creative uses for the accelerometer: magic tricks, pedometers, air-mice, and even user authentication based on waving the phone around.
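For a flavor of how simple some of these uses can be, here is a minimal pedometer sketch; it counts rising edges in the accelerometer magnitude, and the threshold is an assumption that would need tuning against real sensor data:

```python
import math

def count_steps(samples, threshold=11.0):
    """Count steps as rising edges of the accelerometer magnitude.

    samples: (ax, ay, az) tuples in m/s^2. The magnitude sits near
    9.8 (gravity) at rest; the threshold is a guess.
    """
    steps, above = 0, False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and not above:
            steps += 1               # rising edge: one footfall
            above = True
        elif magnitude < threshold:
            above = False
    return steps

# Two simulated footfalls in an otherwise-at-rest trace:
trace = [(0, 0, 9.8)] * 5 + [(0, 3, 12)] + [(0, 0, 9.8)] * 5 + [(0, 3, 12)]
print(count_steps(trace))   # 2
```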

Not all sensor technologies are so fertile. For example the proximity sensor is still pretty much only used to dim the screen and disable the touch sensing when you hold the phone to your ear or put it in your pocket.

So what about the user-facing camera? Is it a one-trick pony like the proximity sensor, or a springboard to innovation like the accelerometer? Although videophoning has been a perennial bust, I would argue for the latter: the you-facing camera is pregnant with possibilities as a sensor.

Looking at the Aberdeen report, I was curious to see “gesture recognition” on a list of features that will appear on 60% of phones by 2018. The others on the list are hardware features, but once you have a camera, gesture recognition is just a matter of software. (The Kinect, with its dedicated depth-sensing hardware, is a sidetrack to this, provoked by a lack of compute power.)

In a phone, compute power means battery drain, so that’s a limitation on using the camera as a sensor. But each generation of chips becomes more power-efficient as well as more powerful, and as phone makers add more and more GPU cores, the developer community keeps finding useful new ways to max them out.

Gesture recognition is already here from Samsung, and will soon be on every Android device. The industry is gearing up for innovation in phone-based computer vision with OpenVX from Khronos. When always-on computer vision becomes feasible from a power-drain point of view, gesture recognition and face tracking will look like baby steps. Smart developers will come up with killer applications that are currently unimaginable. For example, how about a library implementation of Paul Ekman’s emotion recognition algorithms to let you know how you are really feeling right now? Or, in concert with Google Glass, to make sure you are never again oblivious to your spouse’s emotional temperature.
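Face tracking, at least, is already just a few lines of code on a PC. Here is a minimal sketch using OpenCV’s stock Haar cascade on the you-facing camera; it only draws boxes around detected faces, and anything Ekman-like would have to be layered on top of a loop like this:

```python
import cv2  # pip install opencv-python

# Stock frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # the you-facing camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```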
Update November 19th: Here’s some news and a little bit of technology background on this topic…
Update November 22: It looks like a company is already working on the emotion-recognition technology.

Upcoming WebRTC World Conference

Last month, WebRTC was included in Firefox for Android. It has been available for a while in the desktop versions of Chrome, Firefox, and Opera. Justin Uberti of Google claims that this adds up to a billion devices running WebRTC.

To get up to speed on the challenges and opportunities of WebRTC, and the future of real-time voice/video communications, Wirevolution’s Charlie Gold will be attending the WebRTC World Conference next month and writing about it here. The conference runs Nov 19-21 at the Convention Center in Santa Clara, CA.

At the conference we hope to make sense of the wide range of applications integrating WebRTC, and to relate them to integration opportunities for service providers. The applications range from standard video-enabled apps such as webcasting and security, to enterprise applications such as CRM and contact centers, to emerging opportunities such as virtual reality and gaming. Service providers can combine WebRTC with IMS and RCS, and can use it to manage network capabilities and end users’ quality of experience.

The conference is organized around four tracks: developer workshops, B2C, enterprise, and service providers.

  • The developer workshop topics include concepts and structures of WebRTC, implementation and build options, standardization efforts, signaling options, applications to the Internet of Things (IoT), codec evolution, monitoring and alarms, firewall traversal with STUN/TURN/ICE, and large-scale simultaneous delivery in applications such as webcasting, gaming, virtual reality (VR), and security.
  • Business and consumer application sessions cover successful deployment strategies for use cases like collaboration and conferencing, call centers and the Internet of Things (IoT). Other sessions in this track cover security, device requirements and regulatory issues.
  • Service provider workshops include IMS value in a world of WebRTC, how to use WebRTC, deployment strategies, how to extend existing services and offer new services using WebRTC, using WebRTC to acquire new users, and understanding the network impact of WebRTC.
  • The enterprise track has additional sessions on integrating WebRTC into your contact center and websites (public, supplier, internal). These sessions cover details like mapping out your integration strategy between WebRTC and SIP, choosing between a media server and direct media interoperation, and how to deploy a WebRTC portal.

Keynotes will be from Ericsson, Alcatel-Lucent, Mozilla, Genband, Mavenir, Radisys, CafeX and presumably others.

To round it out, there will be a plethora of special workshops, real-time demos, panels and round tables.

With the momentum of WebRTC growing by leaps and bounds, we are looking forward to attending and sharing more on WebRTC next month.

Multipath ambiguity

In Wi-Fi, multipath used to mean the way that signals traversing different spatial paths arrive at different times. Before MIMO this was considered an impairment to the signal, but with multiple antennas this ‘spatial diversity’ is exploited to deliver the huge speed increases of 802.11n over 802.11g.
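A back-of-envelope Shannon calculation shows why extra spatial streams matter so much; the channel widths and SNR below are illustrative assumptions, and real 802.11n rates also depend on coding, guard intervals and more:

```python
import math

def shannon_capacity_mbps(streams, bandwidth_mhz, snr_db):
    """Idealized ceiling: independent spatial streams over one channel."""
    snr = 10 ** (snr_db / 10)
    return streams * bandwidth_mhz * math.log2(1 + snr)

print(shannon_capacity_mbps(1, 20, 25))   # ~166: 802.11g-like, 1 stream, 20 MHz
print(shannon_capacity_mbps(4, 40, 25))   # ~1330: 802.11n-like, 4 streams, 40 MHz
```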

This week has seen a big PR push on a different “multipath,” embodied in IETF RFC 6824, “TCP Extensions for Multipath Operation with Multiple Addresses” (MPTCP). Of course you know that one of the robustness features of IP is that it can deliver packets over different routes, but RFC 6824 takes this to another level, as described in this article in Ars Technica, and this one at MIT Technology Review.
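One of MPTCP’s design goals is that applications need not change: a multipath connection still looks like a single TCP socket. As a sketch (assuming a Linux kernel with MPTCP support, which arrived in mainline well after this RFC was published), the only application-visible difference is the protocol number:

```python
import socket

# The constant appears in recent Pythons; 262 is the raw Linux value.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
s.connect(("example.com", 80))   # one socket, potentially several paths
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(s.recv(200))
s.close()
```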

Clearing the Cloud for Reliable, Crystal-Clear VoIP Services

The compelling advantage of VoIP is that it is far cheaper than circuit-switched technology. But VoIP calls often sound horrible. It doesn’t have to be this way. Although VoIP is intrinsically prone to jitter, delay and packet loss, good system design can mitigate all of these impairments. The simplest solution is over-provisioning bandwidth.

The lowest-bandwidth leg of a VoIP call, where the danger of delayed or lost packets is greatest, is usually the ‘last mile’ WAN connection from the ISP to the customer premises. This is also where bandwidth is most expensive.

On this last leg, you tend to get what you pay for. Cheap connections are unreliable. Since businesses live or die with their phone service, they are motivated to pay top dollar for a Service Level Agreement specifying “five nines” reliability. But there’s more than one way to skin a cat. Modern network architectures achieve high levels of reliability through redundant low-cost, less reliable systems. For example, to achieve 99.999% aggregate reliability, you could combine two independent systems (two ISPs) each with 99.7% reliability, three each with 97.8% reliability, or four each with 94% reliability. In other words, if your goal is 5 minutes or less of system down-time per year, with two ISPs you could tolerate 4 minutes of down-time per ISP per day. With 3 ISPs, you could tolerate 30 minutes of down-time per ISP per day.
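The arithmetic, assuming the ISPs fail independently, is just a product of failure probabilities:

```python
def aggregate_availability(per_link, n):
    """n independent links: an outage needs all of them down at once."""
    return 1 - (1 - per_link) ** n

for per_link, n in [(0.997, 2), (0.978, 3), (0.94, 4)]:
    downtime_min_per_day = (1 - per_link) * 24 * 60
    print(f"{n} links at {per_link:.1%}: aggregate "
          f"{aggregate_availability(per_link, n):.4%}, "
          f"tolerating {downtime_min_per_day:.0f} min/day down per link")
```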

Here’s a guest post from Dr. Cahit Jay Akin of Mushroom Networks, describing how to do this:

Clearing the Cloud for Reliable, Crystal-Clear VoIP Services

More companies are interested in cloud-based VoIP services, but concerns about performance hold them back. Now there are technologies that can help.

There’s no question that hosted, cloud-based Voice over IP (VoIP) and IP-PBX technologies are gaining traction, largely because they reduce costs for equipment, lines, manpower, and maintenance. But there are stumbling blocks – namely around reliability, quality and weak or non-existent failover capabilities – that are keeping businesses from fully committing.

Fortunately, there are new and emerging technologies that can optimize performance without the need for costly upgrades to premium Internet services. These technologies also protect VoIP services from jitter, latency caused by slow network links, and other common unpredictable behaviors of IP networks that impact VoIP performance. For example, Broadband Bonding, a technique that bonds multiple Internet lines into a single connection, boosts connectivity speeds and improves management of the latency within an IP tunnel. Using multiple links in this way, advanced algorithms can closely monitor the WAN links and make intelligent decisions about each packet of traffic to ensure nothing is ever late or lost during communication.
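The vendors’ actual algorithms are proprietary, but the general idea of per-packet link selection can be sketched in a few lines; the link numbers below are made-up stand-ins for what a real implementation would measure continuously:

```python
# Link state that a real implementation would measure continuously;
# the figures here are made up for illustration.
links = [
    {"name": "dsl",   "latency_ms": 18, "queued_ms": 0.0, "kbps": 2000},
    {"name": "cable", "latency_ms": 15, "queued_ms": 0.0, "kbps": 4000},
    {"name": "lte",   "latency_ms": 20, "queued_ms": 0.0, "kbps": 3000},
]

def send(packet_bits):
    """Queue each packet on the link with the lowest expected delay."""
    best = min(links, key=lambda l: l["latency_ms"] + l["queued_ms"])
    best["queued_ms"] += packet_bits / best["kbps"]   # bits/kbps = ms
    return best["name"]

# A burst of voice-sized packets spreads itself across the bonded links:
print([send(8000) for _ in range(12)])
```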

VoIP Gains Market Share

The global VoIP services market, including residential and business VoIP services, totaled $63 billion in 2012, up 9% from 2011, according to market research firm Infonetics. Infonetics predicts that the combined business and residential VoIP services market will grow to $82.7 billion in 2017. While the residential segment makes up the majority of VoIP services revenue, the fastest-growing segment is hosted VoIP and Unified Communications (UC) services for businesses. Managed IP-PBX services, which focus on dedicated enterprise systems, remain the largest business VoIP services segment.

According to Harbor Ridge Capital LLC, which did an overview of trends and mergers & acquisitions activity of the VoIP market in early 2012, there are a number of reasons for VoIP’s growth. Among them: the reduction in capital investments and the flexibility hosted VoIP provides, enabling businesses to scale up or down their VoIP services as needed. Harbor Ridge also points out a number of challenges, among them the need to improve the quality of service and meet customer expectations for reliability and ease of use.

But VoIP Isn’t Always Reliable

No business can really afford a dropped call or a garbled message left in voicemail. But these mishaps do occur when using pure hosted VoIP services, largely because they are reliant on the performance of the IP tunnel through which the communications must travel. IP tunnels are inevitably congested and routing is unpredictable, two factors that contribute to jitter, delay and lost packets, which degrade the quality of the call. Of course, if an IP link goes down, the call is dropped.

Hosted, cloud-based VoIP services offer little in the way of traffic prioritization, so data and voice fight it out for Internet bandwidth. And there’s little monitoring available. IP-PBX servers placed in data centers or at the company’s headquarters can help by providing some protection over pure hosted VoIP services. They offer multiple WAN interfaces that let businesses add additional, albeit costly, links to serve as backups if one fails. Businesses can also take advantage of the various functions that an IP-PBX system offers, such as unlimited extensions and voice mail boxes, caller ID customizing, conferencing, interactive voice response and more. But IP-PBXes are still reliant on WAN performance and offer limited monitoring features. Thus, users and system administrators might not even know about an outage until they can’t make or receive calls. Some hosted VoIP services include a hosted IP-PBX, which typically includes backup, storage and failover functions, as well as limited monitoring.

Boosting Performance through Bonding and Armor

Mushroom Networks has developed several technologies designed to improve the performance, reliability and intelligence of a range of Internet connection applications, including VoIP services. The San Diego, Calif., company’s WAN virtualization solution leverages virtual leased lines (VLLs) and its patented Broadband Bonding, a technique that melds multiple Internet lines into a single connection. WAN virtualization is a software-based technology that uncouples operating systems and applications from the physical hardware, so infrastructure can be consolidated and application and communications resources can be pooled within virtualized environments. WAN virtualization adds intelligence and management so network managers can dynamically build a simpler, higher-performing IP pipe out of real WAN resources, including existing private WANs and various Internet WAN links like DSL, cable, fiber, wireless and others. The solution is delivered via the Truffle appliance, a packet-level load-balancing router with WAN aggregation and Internet failover technology.

Using patented Broadband Bonding techniques, Truffle bonds multiple Internet lines into a single connection to ensure voice applications are clear, consistent and redundant. This provides faster connectivity via the sum of all the line speeds as well as intelligent management of the latency within the tunnel. Broadband Bonding is a cost-effective solution even for global firms that have hundreds of branch offices scattered around the world, because it can be used with existing infrastructures, enabling disparate offices to have the same level of connectivity as the headquarters without too much capital outlay. The end result is a faster connection with multiple built-in redundancies that can automatically shield applications such as VoIP from negative network events and outages. Broadband Bonding also combines the best attributes of the various connections, boosting speeds and reliability.

Mushroom Networks’ newest technology, Application Armor, shields VoIP services from the negative effects of IP jitter, latency, packet drops, link disconnects and other issues. This technology relies on a research field known as Network Calculus, which models and optimizes communication resources. Through decision algorithms, Application Armor monitors traffic and refines routing in the aggregated, bonded pipe by enforcing application-specific goals, whether the goal is throughput or reduced latency.

VoIP at Broker Houlihan Lawrence – Big Savings and Performance

New York area broker Houlihan Lawrence – the nation’s 15th largest independent realtor – has cut its telecommunications bill by nearly 75 percent by deploying Mushroom Networks’ Truffle appliances in its branch offices. The agency began using Truffle shortly after Superstorm Sandy took out the company’s slow and costly MPLS communications network when the storm came ashore near Atlantic City, New Jersey last year. After the initial deployment to support mission-critical data applications including customer relationship management and email, Houlihan Lawrence deployed a state-of-the-art VoIP system and runs voice communications through Mushroom Networks’ solution. The ability to diversify connections across multiple providers and multiple paths assures automated failover in the event a connection goes down, and Application Armor protects each packet, whether it’s carrying voice or data, to ensure quality and performance are unfailing and crystal clear.

Hosted, cloud-based Voice over IP (VoIP) and IP-PBX technologies help companies like Houlihan Lawrence dramatically reduce costs for equipment, lines, manpower, and maintenance. But those savings are far from ideal if they come without reliability, quality and failover capabilities. New technologies, including Mushroom Networks’ Broadband Bonding and Application Armor, can optimize IP performance, boost connectivity speeds, improve monitoring and shield VoIP services from jitter, latency, packet loss, link loss and other unwanted behaviors that degrade performance.

Dr. Cahit Jay Akin is the co-founder and chief executive officer of Mushroom Networks, a privately held company based in San Diego, CA, providing broadband products and solutions for a range of Internet applications.

BYOD Cyber-Security. How concerned should you be?

According to ComputerWeekly.com, “Nearly half of firms supporting BYOD report data breaches.” PwC’s 2013 Information Security Breaches Survey said “9% of large organisations had a security or data breach in the last year involving smartphones or tablets.” But as you know, correlation is not causation, and those quotes may imply a greater danger from BYOD than has yet been observed.

One of the most authoritative and exhaustive analyses of cyber security is Verizon’s annual “Data Breach Investigations Report.” The 2013 edition of the report analyzes over 47,000 ‘security incidents,’ including 621 ‘data breaches.’ It says:

The “Bring Your Own Device” (BYOD) trend is a current topic of debate and planning in many organizations. Unfortunately, we don’t have much hard evidence to offer from our breach data. We saw only one breach involving personally-owned devices in 2011 and a couple more in 2012.

So if your main concern is corporate data breach, the situation is not yet as dire on the mobile side as it is on the non-mobile side. But the Verizon report cautions:

Obviously mobile malware is a legitimate concern. Nevertheless, data breaches involving mobile devices in the breach event chain are still uncommon in the types of cases Verizon and our DBIR partners investigate. However, we do expect them to make more of an appearance in our data as mobile payment systems continue to become more common.

Two reports that focus on mobile malware are Trend Micro’s “Mobile Threat and Security Roundup,” and one I mentioned in a previous post, BlueCoat’s “2013 Mobile Malware Report.”

According to Trend:

In 2012, we detected 350,000 malicious and high-risk Android app samples, showing a significant increase from the 1,000 samples seen in 2011. It took less than three years for malicious and high-risk Android apps to reach this number—a feat that took Windows malware 14 years.

Just as Windows malware varied, so did Android malware—around 605 new malicious families were detected in 2012. Premium service abusers, which charge users for sending text messages to a premium-rate number, comprised the top mobile threat type, with transactions typically costing users US$9.99 a month. And victims of mobile threats didn’t just lose money, they also lost their privacy. The issue of data leakage continued to grow as more ad networks accessed and gathered personal information via aggressive adware.

Aggressive adware in mobile devices are now similar to the notorious spyware, adware, and click-fraud malware rampant in the early days of the PC malware era. They, like PC malware, generate profit by selling user data. PC malware took advantage of loopholes in legitimate ads and affiliate networks, while today’s aggressive adware can cause data leakages that aren’t always limited to malicious apps. Even popular and legitimate apps can disclose data.

The BlueCoat report concurs with this assessment:

Mobile threats are still largely mischiefware – they have not yet broken the device’s security model but are instead more focused on for-pay texting scams or stealing personal information.

So mobile malware is exploding, but so far targeting individuals in relatively trivial thefts. The Trend report observes that mobile threats are recapitulating the history of computer threats, but faster. Expect to see the mobile device threat level increase.