802.11ac and 802.11ad. Why both?

The current generation of Wi-Fi, 802.11n, is soon to be superseded by not one, but two new flavors of Wi-Fi: 802.11ac and 802.11ad, also known as WiGig.

802.11n claims a maximum raw data rate of 600 megabits per second. 802.11ac and WiGig each claim over ten times this: about 7 gigabits per second. So why do we need them both? The answer is that they have different strengths and weaknesses. To some extent they can fill in for each other, and while they are useful for some of the same things, each has use cases for which it is superior.

The primary difference between them is the spectrum they use. 802.11ac uses 5 GHz, WiGig uses 60 GHz. There are four main consequences of this difference in spectrum:

  • Available bandwidth
  • Propagation distance
  • Crowding
  • Antenna size

Available bandwidth:

There are no free lunches in physics. The maximum bit-rate of a wireless channel is limited by its bandwidth (i.e. the amount of spectrum allocated to it – 40 MHz in the case of old-style 802.11n Wi-Fi), and the signal-to-noise ratio. This limit is given by the Shannon-Hartley Theorem. So if you want to increase the speed of Wi-Fi you either have to allocate more spectrum, or improve the signal-to-noise ratio.
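
To put rough numbers on that, here is a minimal sketch of the Shannon-Hartley limit, where capacity equals bandwidth times log2(1 + signal-to-noise ratio); the 25 dB SNR is just an illustrative assumption, not a measured figure:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# An 802.11n-style 40 MHz channel at an assumed 25 dB SNR:
print(f"{shannon_capacity_bps(40e6, 25) / 1e6:.0f} Mbit/s")  # roughly 330 Mbit/s per stream
```

Doubling the bandwidth doubles the limit outright, while improving the signal-to-noise ratio only helps logarithmically, which is why wider channels are the easier route to higher speeds.
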
The amount of spectrum available to Wi-Fi is regulated by the FCC. At 5 GHz, Wi-Fi can use 0.55 GHz of spectrum. At 60 GHz, Wi-Fi can use 7 GHz of spectrum – over ten times as much. 802.11ac divides its spectrum into five 80 MHz channels, which can be optionally bonded into two and a half 160 MHz channels, as shown in this FCC graphic:

Channels in 5 GHz

802.11ad has it much easier:

“Worldwide, the 60 GHz band has much more spectrum available than the 2.4 GHz and 5 GHz bands – typically 7 GHz of spectrum, compared with 83.5 MHz in the 2.4 GHz band.
This spectrum is divided into multiple channels, as in the 2.4 GHz and 5 GHz bands. Because the 60 GHz band has much more spectrum available, the channels are much wider, enabling multi-gigabit data rates. The WiGig specification defines four channels, each 2.16 GHz wide – 50 times wider than the channels available in 802.11n.”

So for the maximum claimed data rates, 802.11ac uses channels 160 MHz wide, while WiGig uses 2,160 MHz per channel. That’s almost fourteen times as much, which makes life a lot easier for the engineers.

Propagation distance:

60 GHz radio waves are absorbed by oxygen, but for LAN-scale distances this is not a significant factor. On the other hand, wood, bricks and particularly paint are far more opaque to 60 GHz waves:
Attenuation of various materials by frequency

Consequently WiGig is most suitable for in-room applications. The usual example is streaming high-definition movies from your phone to your TV, but WiGig's proponents have selected an even shorter-range application as their leading use case: wireless docking for laptops.

Crowding:

5 GHz spectrum is also used by weather radar (Terminal Doppler Weather Radar, or TDWR), and the FCC recently ruled that part of the 5 GHz band is now completely off-limits to Wi-Fi. These channels are marked “X” in the graphic above. All the channels in the graphic designated “U-NII-2” and “U-NII-3” are also subject to a requirement called Dynamic Frequency Selection (DFS): if radar activity is detected on a channel, the Wi-Fi device must stop using it.

5 GHz spectrum is not nearly as crowded as the 2.4 GHz spectrum used by most Wi-Fi, but it’s still crowded and cramped compared to the wide-open vistas of 60 GHz. Even better, the poor propagation of 60 GHz waves means that even nearby transmitters are unlikely to cause interference. And with beam-forming (discussed in the next section), even transmitters in the same room cause less interference. So WiGig wins on crowding in multiple ways.

Antenna size:

Antenna size and spacing are proportional to the wavelength of the spectrum being used. This means that 5 GHz antennas are about an inch long and spaced about an inch apart. At 60 GHz, the antenna size and spacing are roughly a tenth of that. So for handsets, multiple antennas are trivial to do at 60 GHz, but more challenging at 5 GHz.
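
As a rough back-of-the-envelope check (treating a half-wavelength as the characteristic antenna size and spacing, which is a simplification):

```python
C = 299_792_458  # speed of light, m/s

def half_wavelength_mm(freq_hz):
    """Half of the free-space wavelength, a common antenna element size and spacing."""
    return (C / freq_hz) / 2 * 1000

for ghz in (5, 60):
    print(f"{ghz} GHz: ~{half_wavelength_mm(ghz * 1e9):.1f} mm")
# 5 GHz:  ~30.0 mm, a bit over an inch
# 60 GHz: ~2.5 mm, about a tenth of that
```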

What’s so great about having multiple antennas? I mentioned earlier that there are no free lunches in physics, and that maximum bit-rate depends on channel bandwidth and signal-to-noise ratio. That’s how it used to be. Then in the mid-1990s engineers discovered (invented?) an incredible, wonderful free lunch: MIMO. Two adjacent antennas transmitting different signals on the same frequency normally interfere with each other. But the MIMO discovery was that if in this situation you also have two antennas on the receiver, and if there are enough things in the vicinity for the radio waves to bounce off (walls, floors, ceilings, furniture and so on), then you can take the jumbled-up signals at the receiver and, with enough mathematical computer horsepower, disentangle them completely, as if you had sent them on two different channels. And there’s no need to stop at two antennas. With four antennas on the transmitter and four on the receiver, you get four times the throughput. With eight, eight times. Multiple antennas sending, multiple antennas receiving: Multiple Input, Multiple Output, or MIMO. This kind of MIMO is called spatial multiplexing, and it is used in 802.11n and 802.11ac.
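
Here is a toy sketch of that idea in numpy: two streams sent on the same frequency, mixed by a made-up 2×2 channel matrix that the receiver is assumed to know, and recovered with a simple zero-forcing (channel-inversion) step. Real 802.11n/ac receivers are far more sophisticated than this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two streams of BPSK symbols transmitted at the same time on the same frequency.
x = rng.choice([-1.0, 1.0], size=(2, 100))

# The multipath environment mixes them: y = H @ x + noise.
# H is an invented, well-conditioned channel matrix, assumed known at the receiver.
H = np.array([[0.9, 0.4],
              [0.3, 1.1]])
y = H @ x + 0.01 * rng.normal(size=(2, 100))

# "Disentangling": invert the channel (zero-forcing), then slice back to +/-1.
x_hat = np.sign(np.linalg.inv(H) @ y)
print("streams recovered intact:", bool(np.all(x_hat == x)))
```

With an N×N antenna configuration and enough scattering, the same trick recovers N parallel streams, which is where the N-fold throughput multiplier comes from.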

Another way multiple antennas can be used is “beam-forming.” This is where the same signal is sent from each antenna in an array, but at slightly different times. This causes interference patterns between the signals, which (with a lot of computation) can be arranged in such a way that the signals reinforce each other in a particular direction. This is great for WiGig, because it can easily have as many as 16 of its tiny antennas in a single array, even in a phone, and that many antennas can focus the beam tightly. Even better, for a given antenna size, shorter wavelengths spread out less, so the beam stays focused. As a result, most of the transmission power can be aimed directly at the receiving device, so for a given power budget the signal can travel a lot further, or for a given distance the transmission power can be greatly reduced.
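
A minimal sketch of the arithmetic behind steering such a beam, for an assumed 16-element linear array with half-wavelength spacing and an arbitrary 30° steering angle:

```python
import numpy as np

WAVELENGTH = 3e8 / 60e9          # ~5 mm at 60 GHz
SPACING = WAVELENGTH / 2         # assumed half-wavelength element spacing
N_ELEMENTS = 16                  # the 16-antenna array mentioned above
STEER_DEG = 30                   # arbitrary example direction

# Phase offset for element n so that all the wavefronts add up at STEER_DEG:
n = np.arange(N_ELEMENTS)
phase_deg = np.degrees(2 * np.pi * n * SPACING * np.sin(np.radians(STEER_DEG)) / WAVELENGTH) % 360
print(phase_deg)                 # 0, 90, 180, 270, 0, ... for this geometry
```

Note that the whole 16-element array is only about 4 cm across at 60 GHz, which is why it fits in a phone.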

How does 802.11ac get to 6.9 Gigabits per second?

You know from a previous post how 802.11n gets to 600 megabits per second. 802.11ac does just three things to increase that by 1,056%:

  1. It adds a new Modulation and Coding Scheme (MCS) called 256-QAM. This increases the number of bits transmitted per symbol from 6 to 8, a factor of 1.33.
  2. It increases the maximum channel width from 40 MHz to 160 MHz (160 MHz is optional, but 80 MHz support is mandatory.) This increases the number of subcarriers from 108 to 468, a factor of 4.33.
  3. It increases the maximum MIMO configuration from 4×4 to 8×8, increasing the number of spatial streams by a factor of 2. Multi-User MIMO (MU-MIMO) with beamforming means that these spatial streams can be directed to particular clients, so while the AP may have 8 antennas, the clients can have fewer: for example, 8 clients each with one antenna.

Put those factors together and you have 1.33 x 4.33 x 2 = 11.56. Multiply the 600 megabits per second of 802.11n by that factor and you get 600 x 11.56 = 6,933 megabits per second for 802.11ac.
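
The same number falls out of the 802.11ac PHY parameters directly; a minimal sketch, assuming the standard short-guard-interval symbol time of 3.6 µs and the 5/6 coding rate used with 256-QAM at the top MCS:

```python
def max_phy_rate_mbps(data_subcarriers, bits_per_symbol, coding_rate,
                      spatial_streams, symbol_time_us=3.6):
    """Peak PHY rate: coded data bits per OFDM symbol divided by the symbol time."""
    bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * coding_rate * spatial_streams
    return bits_per_ofdm_symbol / symbol_time_us  # bits per microsecond = Mbit/s

# 160 MHz channel (468 data subcarriers), 256-QAM (8 bits, rate 5/6), 8 streams:
print(max_phy_rate_mbps(468, 8, 5/6, 8))   # ~6933 Mbit/s
# 80 MHz (234 data subcarriers) with 3x3 MIMO, as in the first-generation chips below:
print(max_phy_rate_mbps(234, 8, 5/6, 3))   # ~1300 Mbit/s
```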

Note that nobody does this yet, and 160 MHz channels and 8×8 MIMO are likely to remain unimplemented for a long time. For example, Broadcom’s recently announced BCM4360 and Qualcomm’s QCA9860 do 80 MHz channels, not 160 MHz, and 3×3 MIMO, so they claim maximum raw bit-rates of 1.3 gigabits per second. Which is still impressive.

Maximum theoretical raw bit-rate is a fun number to talk about, but of course in the real world it will (almost) never be reached. What matters more is useful throughput (raw bit-rate minus MAC overhead) and rate at range, the throughput you are likely to get at useful distances. Delivering that is very difficult, and it is where manufacturers can differentiate themselves with superior technology. For phone chips, power efficiency is also an important differentiator.

Mobile Virtualization

According to Electronista, ARM’s next generation of chips for phones and tablets should start shipping in devices at the end of this year.

These chips are based on ARM’s big.LITTLE architecture. big.LITTLE chips aren’t just multi-core; they contain cores that are two different implementations of the same instruction set: a Cortex A7 and one or more Cortex A15s. The Cortex A7 has an identical instruction set to the A15, but is slower and more power efficient – ARM says it is the most power-efficient processor it has ever developed. The idea is that phones will get great battery life by mainly running on the slow, power-efficient Cortex A7, and great performance by using the A15 on the hopefully rare occasions when they need its muscle. ‘Rare’ in this context is relative. Power management on modern phones involves powering up and powering down subsystems in microseconds, so a ‘rarely’ used core could still be activated several times in a single second.

The Cortex A15 and the Cortex A7 are innovative in another way, too: they are the first cores to implement the virtualization extensions to the ARMv7-A architecture, ARM’s first hardware support for virtualization.

Even without hardware support, virtualization on handsets has been around for a while; phone OEMs use it to make cheaper smartphones by running Android on the same CPU that runs the cellular baseband stack. ARM says:

Virtualization in the mobile and embedded space can enable hardware to run with less memory and fewer chips, reducing BOM costs and further increasing energy efficiency.

This application, running Android on the same core as the baseband, does not seem to have taken the market by storm, presumably because of performance. Even the advent of hardware support for virtualization may not rescue this application, since mobile chip manufacturers now scale performance by adding cores, and Moore’s law is rendering multicore chips cheap enough to put into mass-market smartphones.

So what about other applications? The ARM piece quoted above goes on to say:

Virtualization also helps to address safety and security challenges, and reduces software development and porting costs by man years.

In 2010 Red Bend Software, a company that specializes in manageability software for mobile phones, bought VirtualLogix, one of the three leading providers of virtualization software for phones (the other two being Trango, bought by VMware in 2008, and OK Labs).

In view of Red Bend’s market, it looks as if they acquired VirtualLogix primarily to enable enterprise IT departments to securely manage their employees’ phones. BYOD (Bring Your Own Device) is a nightmare for IT departments; historically they have kept chaos at bay by supporting only a limited number of devices and software setups. But in the era of BYOD employees demand to use a vast and ever-changing variety of devices. Virtualization enables Red Bend to add a standard corporate software load to any phone.

This way, a single phone has a split personality, and the hardware virtualization support keeps the two personalities securely insulated from each other. On the consumer side, the user downloads apps, browses websites and generally engages in risky behavior. But none of this impacts the enterprise side of the phone, which remains secure.

The Post-PSTN Telco Cloud

I will be moderating a panel on this topic at ITExpo East 2012 in Miami at 3:00pm on Thursday, February 2nd.

The panelists are Brian Donaghy of Appcore, LLC, Jan Lindén of Google, Hugh Goldstein of Voxbone and Danielle Morrill of Twilio.

The pitch for the panel is:

The FCC has proposed a date of 2018 to sunset the Public Switched Telephone Network (PSTN) and move the nation to an all-IP network for voice services. This session will explore the emerging trends in the Telco Cloud with case studies. Learn how traditional telephone companies are adapting to compete, and about new opportunities for service providers, including leveraging cloud computing and Infrastructure as a Service (IaaS) systems deployed on scalable commodity hardware to deliver voice and video services including IVR, IVVR and conferencing, plus Video on Demand and local CDNs.

In related news, a group of industry experts is collaborating on a plan for this transition. The draft can be found here. I volunteered as the editor for one of the chapters, so the current outline roughs out some of my opinions on this topic. This is a collaborative project, so please contact me if you can help to write it.

ITExpo: The Realities of Mobile Videoconferencing

I will be moderating a panel on this topic at ITExpo East 2012 in Miami at 1:00pm on Thursday, February 2nd.

The panelists will be Girish Khavasi of Dialogic, Trent Johnsen of Hookflash, Anatoli Levine of RADVISION and Al Balasco of RadiSys. This is a heavy-hitting collection of panelists. Come with your toughest questions – you will get useful, authoritative answers.

The pitch for the panel is:

As 4G mobile networks continue to be rolled out and new devices are adopted by end users, mobile video conferencing is becoming an increasingly important component in today’s Unified Communications ecosystem. The ability to deliver enterprise-grade video conferencing including high definition voice, video and data-sharing will be critical for those playing in this space. Mobile video solutions require vendors to consider a number of issues including interoperability with new and traditional communications platforms as well as mobile operating systems, user interfaces that maximize the experience, and the ability to interoperate with carrier networks. This session will explore the business-class mobile video platforms available in the market today as well as highlight some end-user experiences with these technologies.

ITExpo: The Future is Now: Mobile Callers Want Visuals with Voice over the Existing Network

I will be moderating a panel on this topic at ITExpo East 2012 in Miami at 2:30 pm on Wednesday, February 1st.

The panelists will be Theresa Szczurek of Radish Systems, LLC, Jim Machi of Dialogic, Niv Kagan of Surf Communications Solutions and Bogdan-George Pintea of Damaka.

The concept of visuals with voice is a compelling one, and there are numerous kinds of visual content that you may want to convey. For example, when you do a video call with FaceTime or Skype, you can switch the camera to show what you are looking at if you wish, but you can’t share your screen or photos during a call.

FaceTime, Skype and Google Talk all use the data connection for both the voice and video streams, and the streams travel over the Internet.

A different, non-IP technology for videophone service, called 3G-324M, is widely used by carriers in Europe and Asia. It carries the video over the circuit-switched channel, which enables better quality (lower latency) than the data channel. An interesting application of this lets companies put their IVR menus into a visual format, so instead of having to listen through a tedious listing of options that you don’t want, you can instantly select your choice from an on-screen menu. Dialogic makes the back-end equipment that enables applications like on-screen IVR on 3G-324M networks.

Radish Systems uses a different method to provide a similar visual IVR capability for when your carrier doesn’t support 3G-324M (none of the US carriers do). The Radish application is called Choiceview. When you make a call from your iPhone to a Choiceview-enabled IVR, you dial the call the regular way, then start the Choiceview app on your iPhone. The Choiceview IVR matches the Caller ID on the call with the phone number you entered when you set up the app, and pushes a menu to the appropriate client. So the call goes over the old circuit-switched network, while Choiceview communicates over the data network. Choiceview is strictly a client-server application: a Choiceview server can push any data to a phone, but the phone can’t send data the other way, nor can two phones exchange data via Choiceview.
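
To make that call flow concrete, here is a hypothetical sketch of the caller-ID matching step described above. The names and data structures are invented for illustration; the real Choiceview service certainly works differently in detail:

```python
# Hypothetical illustration only; these names are invented, not Radish's API.
registered_apps = {
    "+13035551234": "app-session-42",   # number the user entered during app setup
}

def push_menu(session, items):
    """Stand-in for pushing visual content to the app over the data network."""
    print(f"push to {session}: {items}")

def on_incoming_ivr_call(caller_id):
    """When a voice call arrives, look for a running app registered to that number."""
    session = registered_apps.get(caller_id)
    if session is None:
        return None                      # fall back to ordinary audio-only IVR
    push_menu(session, ["Billing", "Support", "Store hours"])
    return session

on_incoming_ivr_call("+13035551234")
```

The one-way, client-server nature of the design is visible here: the server pushes, the phone only receives.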

So this ITExpo session will try to make sense of this mix: multiple technologies, multiple geographies and multiple use cases for visual data exchange during phone calls.

Droid Razr first look.

First impressions are very good. The industrial design makes the iPhone look clunky. The screen is much bigger, and the overall feel, like the iPhone’s, reeks of quality. The haptic feedback felt slightly odd at first, but I think I will like it once I get used to it.

I was disappointed when the phone failed to detect my 5 GHz Wi-Fi network. The iPhone doesn’t support 5 GHz either, but the Samsung Galaxy S2 and Galaxy Nexus do, and I had assumed parity for the Razr.

Oddly, bearing in mind its dual-core processor, the Droid Razr sometimes seems sluggish compared to the iPhone 4. But the Android user interface is polished and usable, and it has a significant user interface feature that the iPhone sorely lacks: a universal ‘back’ button. The ‘back’ button, like the ‘undo’ feature in productivity apps, fits with the way people work and learn: try something, and if that doesn’t work, try something else.

The Razr camera is currently unusable for me. The first photo I took had a 4 second shutter lag. On investigation, I found that if you hold the phone still, pointed at a static scene, it takes a couple of seconds to auto-focus. If you wait patiently for this to happen, watching the screen and waiting for the focus to sharpen, then press the shutter button, there is almost no shutter lag. But if you try to ‘point and shoot’ the shutter lag can be agonizingly long – certainly long enough for a kid to dodge out of the frame. This may be fixable in software, and if so, I hope Motorola gets the fix out fast.

While playing with the phone, I found it got warm. Not uncomfortably hot, but warm enough to worry about the battery draining too fast. Investigating this, I found a wonderful power analysis display, showing which parts of the phone are consuming the most power. The screen, not surprisingly, was consuming the most – 35%. But the second most, 24%, was being used by ‘Android OS’ and ‘Android System.’ As the battery expired, the phone kindly suggested that when the power got low it could automatically shut things off for me, like social network updates and GPS, and told me that this could double my battery life. Even so, battery life does not seem to be a strength of the Droid Razr. Over a few days, I observed that even when the phone was completely unused, the battery got down to 20% in 14 hours, and the vast majority of the power was spent on ‘Android OS.’

So nice as the Droid Razr is, on balance I still prefer the iPhone.

P.S. I had a nightmare activation experience. I bought the phone at Best Buy, and supposedly due to a failure of communication between the Best Buy and Verizon servers, the phone didn’t activate on the Verizon network. After 8 hours of non-activation, including an hour on the phone with Verizon customer support (30 minutes of which was the two of us waiting for Best Buy to answer their phone), I went to a local Verizon store, which speedily activated the phone with a new SIM.

Deciding on the contract, I was re-stunned to rediscover that Verizon charges $20 per month for SMS. I gave this a miss since I can just use Google Voice, which costs $480 less over the life of the contract.

iPhone 4S not iPhone 5

Technically the iPhone 4S doesn’t really pull ahead of the competition: Android-based phones like the Samsung Galaxy S II.

The iPhone 4S even has some worse specifications than the iPhone 4. It is 3 grams heavier and its standby battery life is 30% less. The screen is no larger – it remains smaller than the standard set by the competition. On the other hand the user experience is improved in several ways: the phone is more responsive thanks to a faster processor; it takes better photographs; and Apple has taken yet another whack at the so-far intractable problem of usable voice control. A great benefit to Apple, though not so much to its users, is that the new Qualcomm baseband chip works for all carriers worldwide, so Apple no longer needs different innards for AT&T and Verizon (though Verizon was presumably disappointed that Apple didn’t add a chip for LTE support).

Since its revolutionary debut, the history of the iPhone has been one of evolutionary improvements, and the improvements of the iPhone 4S over the iPhone 4 are in proportion to the improvements in each of the previous generations. The 4S seems to be about consolidation, creating a phone that will work on more networks around the world, and that will remain reliably manufacturable in vast volumes. It’s a risk-averse, revenue-hungry version, as is appropriate for an incumbent leader.

The technical improvements in the iPhone 4S would have been underwhelming if it had been called the iPhone 5, but for a half-generation they are adequate. By mid-2012 several technologies will have ripened sufficiently to make a big jump.

First, Apple will have had time to move its CPU manufacturing from the 45 nm process of the current A5 to TSMC’s 28 nm process, yielding a major improvement in battery life. That gain will be partially negated by the monstrous power of the rumored four-core A6 design, and the Linley report cautions that the transition may not be all plain sailing.

Also by mid-2012 Qualcomm may have delivered a world-compatible single-chip baseband that includes LTE (aka ‘real 4G’).

But the 2012 iPhone faces a serious problem. It will continue to suffer a power, weight and thinness disadvantage relative to Samsung smartphones until Apple stops using LCD displays. Because they don’t require backlighting, Super AMOLED display panels are thinner, lighter and consume less power than LCDs. Unfortunately for Apple, Samsung is the leading supplier of AMOLED displays, and Apple’s relationship with Samsung continues to deteriorate. Other LCD alternatives like Qualcomm’s Mirasol are unlikely to be mature enough to rely on by mid-2012. The mid-2012 iPhone will need a larger display, but it looks as though it will continue to be a thick, power-hungry LCD.

HTML 5 takes iPhone developer support full circle

Today Rethink Wireless reported that Facebook is moving towards HTML 5 in preference to native apps on phones.

When the iPhone arrived in 2007, this was Steve Jobs’ preferred way to do third-party applications:

We have been trying to come up with a solution to expand the capabilities of the iPhone so developers can write great apps for it, but keep the iPhone secure. And we’ve come up with a very. Sweet. Solution. Let me tell you about it. An innovative new way to create applications for mobile devices… it’s all based on the fact that we have the full Safari engine in the iPhone. And so you can write amazing Web 2.0 and AJAX apps that look and behave exactly like apps on the iPhone, and these apps can integrate perfectly with iPhone services. They can make a call, check email, look up a location on Gmaps… don’t worry about distribution, just put ‘em on an internet server. They’re easy to update, just update it on your server. They’re secure, and they run securely sandboxed on the iPhone. And guess what, there’s no SDK you need! You’ve got everything you need if you can write modern web apps…

But the platform and the developer community weren’t ready for it, so Apple was quickly forced to come up with an SDK for native apps, and the app store was born.

So it seems that Apple was four years early on its iPhone developer solution, and that in bowing to public pressure in 2007 to deliver an SDK, it made a ton of money that it otherwise wouldn’t have:

A web service which mirrors or enhances the experience of a downloaded app significantly weakens the control that a platform company like Apple has over its user base. This has already been seen in examples like the Financial Times newspaper’s HTML5 app, which has already outsold its former iOS native app, with no revenue cut going to Apple.