Eating my words

I am a skeptical guy. My previous posts on the iPhone balanced criticisms with cautious enthusiasm. But looking back on them, it is hard for me to remember how I felt back then. When I bought the iPhone, I expected it to follow the usual trajectory of my research purchases: use it for a while to see what I could learn, then throw it in a drawer with the rest of them, or give it to the kids to destroy.

What actually happened was that the numerous deficiencies of the iPhone have failed to keep me from addiction. The week it came out, a colleague posted on his Skype comment line “Apple iPhone: underhyped.” I got a good laugh out of this, but he said he was serious, and now I tend to agree with him.

My biggest objection was the slow WAN data connection, and it is slow. But it’s way better than no WAN connection at all. I browse the web for reviews when I am vacillating in Fry’s; I read the news when I am waiting in lines or waiting rooms. But the absolute neatest feature is Google Maps with its congestion indications on the freeways. Fire it up and look at a map of the city and you can see the jammed freeways highlighted in red. Google Maps is also useful on the iPhone the same way it is on the PC screen: center the map on your current location and type in “restaurants” or whatever.

The nice big screen makes email reading easy. The timer is great for steaks. I have ditched my alarm clock in favor of the iPhone. I have even started listening to the occasional podcast of my favorite radio shows…

Sometimes I catch myself engaged in rapt contemplation of its ineffable look and feel.

It is far from perfect, but it is in a different (and superior) category from any other phone.

Tango FMC for enterprises

Tango Networks was founded in 2005 and fully funded by February of 2007. It is one of several startups addressing the enterprise FMC market, integrating with the corporate PBX, but it claims a unique twist in that it also integrates closely with service provider infrastructure.

Tango has a box plugged into the MNO’s call control infrastructure talking directly to another Tango box that plugs into the corporate PBX. These boxes are named Abrazo-C (carrier) and Abrazo-E (enterprise). Abrazo is Spanish for “embrace,” reinforcing the concept of the carrier side and the enterprise side being tightly connected. This balanced architecture enables Tango to offer a rich feature set while remaining versatile.

One aspect of this versatility is that they aren’t fixated on dual-mode phones. Tango works with any cell phone, and hands off between the corporate desk phone and the cell phone in response to the user punching in a star code on their phone keypad. This method of input also gives the user complete access to all the features of the corporate PBX over the cellular network. But Tango acknowledges that star codes are not the most user-friendly of interfaces, so they do provide an “ultra thin client” for those phones that support third-party software.

Requiring a box in the carrier network helps with things like caller ID manipulation and number translation (like 4 digit dialing to PBX extensions from your cell phone). On the other hand it limits Tango’s ability to sell directly to enterprises. The primary customer for all sales has to be a carrier. Marketing efforts directed to end users serve only to provide pull through.

Offering a box on the enterprise premises addresses the major concern of businesses evaluating VCC and other carrier centric FMC solutions: businesses don’t want to lose control of their voice network. By leaving the enterprise side of the system under the control of the corporate IT department, Tango resembles the PBX model of business voice more closely than the never popular Centrex model.

Google phone developer community

The genesis of the Google phone project is described in this Boston Globe article by Scott Kirsner.

The Open Handset Alliance will release its SDK on November 12th, 2007. The iPhone SDK will not be released until four months later. Microsoft and Symbian already have not only mature SDKs, but vigorous development communities: in mid-2007 Windows Mobile had 650 thousand registered developers, and Forum Nokia had 2 million registered individuals and 440 companies in its “Platinum Program.”

As usual with a new development environment, it’s a chicken and egg situation, but the chicken is coming out in pretty good shape; if the base platform debuts with comparable functionality to what the iPhone came out with, it’s a low-risk proposition for the phone OEMs, and Google’s magic coattails will ensure hysterical enthusiasm in the developer community.

Google phone alliance members

The Open Handset Alliance was announced today by Google and 30 or so other companies. Until now the highest-profile open source handset operating environment was OpenMoko.

The list of participants has no real surprises in it. Nokia isn’t on the list, most likely because this project competes head-on with Symbian. This may also help to explain why Sony Ericsson isn’t a supporter either. But the other three of the top five handset manufacturers are members: Motorola, Samsung and LG. All of these ship Symbian-based phones, but they also ship Windows-based phones, so they are already pursuing an OS-agnostic strategy. Open standards are less helpful to a market leader than to its competitors.

Of course the other leading smartphone OS vendors are also missing from the list: Microsoft, Apple, Palm and RIM.

eBay is there because this massively benefits Skype.

Silicon vendors retain more control of their destiny when there is a competitive software community, so it makes sense that TI is aboard even though it is the market leader in cellphone chips. Intel is another chip vendor that is a member. Intel can normally be relied on to support this type of open platform initiative, and although Intel sold its handset-related businesses in 2006, its low power CPU efforts may evolve from ultra-mobile PCs down to smartphones in a few years.

Among MNOs Verizon and AT&T Mobile are notorious for their walled-garden policies, so it makes sense that they aren’t on the list, though Sprint and T-Mobile are, which is an encouraging indication.

At the launch of the iPhone Steve Jobs said that the reason there would be no SDK for the iPhone was that AT&T didn’t want their network brought down by a rogue application. I ridiculed this excuse in an Internet Telephony column. Even so, the carriers do have a valid objection to completely open platforms: their subscribers will call them for support when the phone crashes. For this reason, applications that use sensitive APIs in Symbian must be “Symbian signed.” When he announced the iPhone SDK, Steve Jobs alluded to this as a model that Apple may follow.

So Sprint’s and T-Mobile’s participation in this initiative is very interesting. Sprint’s press release says:

Unlike other wireless carriers, Sprint allows data users to freely browse the Internet outside its portal and has done so since first offering access to the Internet on its phones in 2001.

Open Internet access is actually available from all the major US MNOs other than Verizon; AT&T ships the best handset for this, the iPhone. But the iPhone doesn’t (officially) let users load whatever software they want onto the phone. Symbian and Windows-based phones generally do, and again all the major MNOs ship handsets based on these operating systems. An open source handset goes a big step further, but who benefits depends on what parts of the source code are published, and what APIs are exposed by the proprietary parts of the system. As a rule of thumb, one would think that giving developers this greater degree of control over the system will increase their scope for innovation.

White Spaces – why the resistance?

It’s an amazing idea. Radio signals at less than 1 GHz pass easily through buildings. TV broadcast signals are below 1 GHz so you can use an indoor antenna. Anywhere in the US about half the TV channels are idle, so why not use these empty channels (White Spaces) for wireless broadband Internet access? The FCC has been pushing this idea since 2004. The IEEE has a workgroup (802.22) hammering out the technical details, and some of the mightiest companies in the technosphere are banding together to make it happen.

Even the broadcasting industry sees the merit of this idea – in a letter to Senators Stevens and Inouye, David Donovan, the president of MSTV (the Association for Maximum Service Television) says:

Ensuring that the United States is a global leader in the provision of broadband services is a worthy goal. We believe this goal can be accomplished, especially in rural markets, without causing interference to new digital television receivers and converter boxes… Our desire is to find a solution that will bring broadband to underserved Americans while ensuring that consumers’ and broadcasters’ investments in the DTV transition are protected.

Did you spot the catch? The broadcasters are worried that unlicensed use of their spectrum will interfere with their broadcasts. The chief executives of Disney, News Corporation, NBC and CBS sent a letter to the FCC saying:

As you know, current proposals based on “sensing” to avoid interference could cause permanent damage to over-the-air digital television reception.

There are two main categories of issue here: technical and compliance. Both must be addressed to avoid the outcome feared by the broadcasters.

On the technical side, if technologies can be developed that effectively eliminate the potential for interference, and regulations can be crafted that require the use of such technologies, the broadcasters have nothing to fear. This technical issue is relatively easy to debate, and while the broadcasters may seem overly cautious to some, their position is reasonable:

It has taken nearly a decade for government and industry to deploy digital television across this nation. A rush to place millions of unlicensed devices in the TV band without extensive real-world testing should not undermine these efforts.

But technical issues yield to engineers in time, and we can confidently expect cognitive radio to work properly in the end. Credible proponents argue that it is working correctly already. The FCC tested devices from Microsoft and Philips in July 2007 expecting to close this issue with hard data, but in a catastrophic blunder one of the tested devices was defective and failed the tests, leaving the issue open. The broadcast industry seized on this mistake and used it to characterize the technology in general as unripe. But the technical argument will eventually yield to conclusive experimental results, showing that cognitive radio works, and that unlicensed use of this spectrum as proposed by the FCC will not interfere with TV broadcasts.

The compliance and enforcement issue is far tougher to resolve, but it is separate from the White Spaces issue, and should be debated separately. This issue is actually more important, since it concerns not only the TV broadcast frequencies but the utility of the entire radio spectrum in the US. If devices that transmit on radio frequencies are badly engineered, defective or designed in such a way that they don’t conform to the regulations, it is possible that they might interfere with legitimate uses. As things stand, there is no guarantee that this will not happen, since the enforcement arm of the FCC is weak. Michael Marcus, in his “Spectrum Talk” blog, goes into this issue and proposes some actions.

SDK for iPhone

In a message signed by Steve Jobs, Apple announced that it will release an SDK for the iPhone in February.

This means that Adrian Cockroft was right when he said that Apple simply hadn’t had time to create an SDK for the initial release of the product. This was reported in an interesting Wired post.

How open will the iPhone be with the SDK? There are two kinds of openness associated with the iPhone: first, the ability to load applications into the phone’s execution environment and run them, and second, the ability for the phone to work on any GSM network (network unlocking).

The announcement says:

We want native third party applications on the iPhone, and we plan to have an SDK in developers’ hands in February. We are excited about creating a vibrant third party developer community around the iPhone and enabling hundreds of new applications for our users.

To reduce the risk of malware, Apple plans to require digital signing of some kind for the applications; this is a great idea provided that the process to get the signature isn’t too arduous.

It will be interesting to see how much of the system is exposed through the SDK. Nokia’s Symbian environment lets third parties take control of the telephone UI, so that they can implement handset clients for FMC. If the iPhone SDK provides the hooks to do this, the iPhone would become useful in a dual-mode enterprise environment. But it is unlikely that the iPhone will soon be as enterprise-friendly as the Nokia ESeries phones, which have OMA-DM and the just-announced “Freeway” connection management.

As for the network unlocking, Apple is rumored to share in the service revenue stream that AT&T gleans from the iPhone, and is also rumored to have similar arrangements with its European network partners. If you could buy an iPhone and activate it on any network, Apple would miss some of this revenue. This means Apple is motivated to make sure that every iPhone sold is tied to a service plan from which it gets revenue. But once that activation has occurred, and the customer has committed to a long-term service agreement, both AT&T and Apple will get monthly service revenues whether the phone is used on that network or not. On the other hand, Apple will be shipping an unlocked phone in France, since French law limits locking of phones to networks. Whether this will have any effect on unlocking policy in other countries remains to be seen. Unlocked French iPhones will presumably flood eBay as soon as they are released, and class-action suits in the USA may force AT&T to unlock iPhones on demand, or within 90 days of purchase (as they do with other phones) or at the end of the service agreement (two years).

Intel’s Primary Wireless Campus

Intel published a white paper last year about a trial deployment of 802.11a as a replacement for wired Ethernet at a 5,000 person campus. The results were lower costs and happier workers. This was just for PC connectivity. The dual-mode phone phase of the deployment is still to come.

There are several interesting findings in the white paper. First, while the latency of the network increased somewhat, the difference was imperceptible to the users. Second, Intel chose to abandon the VPN, relying on 802.11i for security. This made joining the network faster and easier.

The decision to use 802.11a was presumably for the greater capacity (more non-interfering channels than 11g), and for the cleaner spectrum. 802.11n is superior to 802.11a in capacity and rate at range. This means that what was doable with 11a will be even easier with 11n.

Reliable VoIP

QoS metrics are important, and several companies have products that measure packet loss, jitter, latency and so on. But you can have perfect QoS, and your VoIP system can still be defective for all sorts of reasons.

I spoke with Gurmeet Lamba, VP of Engineering at Clarus Systems, at the Internet Telephony Expo this week. He said that even if a VoIP system is perfectly configured on installation, it can decay over time to the point of unusability. Routers go down and are brought up again with minor misconfigurations; moves, adds and changes accumulate bad settings and policy violations.

VoIP systems are rarely configured perfectly even on installation. For example, IP phones have built-in switches so you can plug your PC into your desk phone. Those ports are unlocked by default. But some phones are installed in public areas like lobbies. It’s easy for installers to forget to lock those ports, so anybody sitting in the lobby can plug their laptop into the LAN. There are numerous common errors of this kind. Clarus has an interesting product that actively and passively tests for them; it monitors policy compliance and triggers alarms on policy violations.
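To make the idea concrete, here is a toy sketch of the kind of rule such a tool might enforce (the data model and the rule are entirely hypothetical, my own invention rather than anything from Clarus’s product): flag any phone in a public area whose PC port is still enabled.

```python
# Hypothetical inventory of deployed IP phones; in a real system this would
# come from the call manager's configuration database.
phones = [
    {"name": "lobby-phone-1",   "location": "lobby",     "pc_port_enabled": True},
    {"name": "desk-phone-312",  "location": "3rd-floor", "pc_port_enabled": True},
    {"name": "lobby-phone-2",   "location": "lobby",     "pc_port_enabled": False},
]

# Locations where an open PC port would let a visitor plug into the LAN.
PUBLIC_AREAS = {"lobby", "reception", "conference"}

def violations(inventory):
    """Return phones that violate the 'no open PC port in public areas' policy."""
    return [p for p in inventory
            if p["location"] in PUBLIC_AREAS and p["pc_port_enabled"]]

for phone in violations(phones):
    print("ALARM: unlocked PC port on", phone["name"], "in", phone["location"])
```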

Clarus uses CTI to do active testing of your VoIP system, looking for badly configured devices and network bottlenecks. Currently it works only on Cisco voice networks, but Clarus plans to support other manufacturers.

Clarus started out focusing on automated testing of latency, jitter and packet loss for IP phone systems. It went on to add help desk support with remote control of handsets, and the ability to roll back phone settings to known good configurations.

The next step was to add “Business Information,” certifying deployment configurations, and helping to manage ongoing operations with change management and vulnerability reports. Clarus’ most recent announcement added passive monitoring based on a policy-based rules engine.

Clarus claims to have tested over 350 thousand endpoints to date. It has partners that offer network monitoring services.

T-Mobile UMA service details

I called the T-Mobile customer service line and received a clarification about the HotSpot@Home service. The charge for this service is just for unlimited calling at home and at T-Mobile hotspots. If you don’t mind using your regular service minutes in these situations, there is no need to subscribe to the @Home service – you can still use the Wi-Fi connection for better reception. So since I never seem to use more than half my minutes, I cancelled the @Home service. The phone still uses Wi-Fi when it can, so the customer service agent appears to be correct.

This makes me a lot happier with T-Mobile. When the Wi-Fi is being used to offload their network and provide better coverage, they don’t charge for it. This is as it should be. If the offload/coverage effect turns out to be a significant benefit for T-Mobile, and as the price of Wi-Fi in handsets comes down, it is conceivable that T-Mobile will find it worthwhile to add Wi-Fi to all their phones.

I remain curious about why my Nokia 6086 can’t use the Wi-Fi for web browsing.

How does 802.11n get to 600 Mbps?

802.11n incorporates all earlier amendments to 802.11, including the MAC enhancements in 802.11e for QoS and power savings.

The design goal of the 802.11n amendment is “HT” for High Throughput. The throughput it claims is high indeed: up to 600 Mbps in raw bit-rate. Let’s start with the maximum throughput of 802.11g (54 Mbps), and see what techniques 802.11n applies to boost it to 600 Mbps:

1. More subcarriers: 802.11g has 48 OFDM data subcarriers. 802.11n increases this number to 52, thereby boosting throughput from 54 Mbps to 58.5 Mbps.

2. FEC: 802.11g has a maximum FEC (Forward Error Correction) coding rate of 3/4. 802.11n squeezes some redundancy out of this with a 5/6 coding rate, boosting the link rate from 58.5 Mbps to 65 Mbps.

3. Guard Interval: 802.11a has a guard interval between OFDM symbols of 800 ns. 802.11n has an option to reduce this to 400 ns, which boosts the throughput from 65 Mbps to 72.2 Mbps.

4. MIMO: thanks to the magical effect of spatial multiplexing, provided there are sufficient multi-path reflections, the throughput of a system goes up linearly with each extra antenna at both ends. Two antennas at each end double the throughput, three antennas at each end triple it, and four quadruple it. The maximum number of antennas in the receive and transmit arrays specified by 802.11n is four. This allows four simultaneous 72.2 Mbps streams, yielding a total throughput of 288.9 Mbps.

5. 40 MHz channels: all previous versions of 802.11 have a channel bandwidth of 20 MHz. 802.11n has an optional mode (controversial and not usable in many circumstances) where the channel bandwidth is 40 MHz. While the channel bandwidth is doubled, the number of data subcarriers is slightly more than doubled, going from 52 to 108. This yields a throughput of 150 Mbps per spatial stream. So again combining four spatial streams with MIMO, we get 600 Mbps; the sketch below works through all five steps.
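For the arithmetic-minded, here is a little sketch (mine, not the standard’s) that reproduces each of these rate steps from the OFDM parameters: data subcarriers times bits per subcarrier times coding rate, divided by the symbol time including the guard interval.

```python
def ofdm_rate(subcarriers, bits_per_subcarrier, coding_rate, symbol_us, guard_us):
    """Raw PHY rate in Mbps for one spatial stream."""
    symbol_time = symbol_us + guard_us                 # total OFDM symbol duration (us)
    bits_per_symbol = subcarriers * bits_per_subcarrier * coding_rate
    return bits_per_symbol / symbol_time               # bits per microsecond == Mbps

# 802.11g baseline: 48 data subcarriers, 64-QAM (6 bits), rate-3/4 code,
# 3.2 us symbol plus 0.8 us guard interval.
print(ofdm_rate(48, 6, 3/4, 3.2, 0.8))        # 54.0 Mbps

print(ofdm_rate(52, 6, 3/4, 3.2, 0.8))        # step 1: 52 subcarriers      -> 58.5 Mbps
print(ofdm_rate(52, 6, 5/6, 3.2, 0.8))        # step 2: rate-5/6 coding     -> 65.0 Mbps
print(ofdm_rate(52, 6, 5/6, 3.2, 0.4))        # step 3: 400 ns guard        -> 72.2 Mbps
print(4 * ofdm_rate(52, 6, 5/6, 3.2, 0.4))    # step 4: four spatial streams -> 288.9 Mbps
print(ofdm_rate(108, 6, 5/6, 3.2, 0.4))       # step 5: 40 MHz, 108 subcarriers -> 150 Mbps
print(4 * ofdm_rate(108, 6, 5/6, 3.2, 0.4))   # four streams of 150 Mbps    -> 600 Mbps
```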

Lower MAC overhead
But raw throughput is not a very informative number.

The 11a/g link rate is 54 Mbps, but the higher layer throughput is only 26 Mbps; the MAC overhead is over 50%! In 11n when the link rate is 65 Mbps, the higher layer throughput is about 50 Mbps; the MAC overhead is down to 25%.
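As a quick check on those figures (using the round numbers quoted above, not measurements of my own):

```python
# MAC overhead from the round numbers above: overhead = 1 - goodput / link rate.
def mac_overhead(link_rate_mbps, goodput_mbps):
    return 1 - goodput_mbps / link_rate_mbps

print(f"11a/g at 54 Mbps link, 26 Mbps goodput: {mac_overhead(54, 26):.0%} overhead")
print(f"11n   at 65 Mbps link, 50 Mbps goodput: {mac_overhead(65, 50):.0%} overhead")
```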

Bear in mind that these numbers are the absolute top speed you can get out of the system. 802.11n has numerous modulation schemes to fall back to when the conditions are less than perfect, which is most of the time.

But to minimize these fall-backs, 11n contains additional improvements to make the effective throughput as high as possible under all circumstances. These improvements are described in the following paragraphs.

Fast MCS feedback – rate selection.
Existing equipment finds it hard to track rapid changes in the channel. Say you walk through the shadow of a pole in the building. The rate may go from 50 to 6 to 50 Mbps in one step. It’s hard for conventional systems to track this, because they adapt based on transmit errors. With delay-sensitive data like voice you have to be very conservative, so adapting up is much slower than adapting down. 11n adds explicit per-packet feedback, recommending the transmission speed for the next packet. This is called Fast MCS (Modulation and Coding Scheme) Feedback.
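Here is a toy sketch of the idea (my own illustration, with a made-up SNR-to-rate table rather than anything from the standard): the receiver measures the channel on each packet it receives and recommends an MCS for the next one, so the rate can drop and recover as fast as the channel does.

```python
# Hypothetical mapping from received SNR (dB) to a recommended link rate (Mbps).
MCS_TABLE = [(5, 6.5), (10, 13), (15, 26), (20, 39), (25, 52), (30, 65)]

def recommend_rate(snr_db):
    """Receiver side: pick the highest rate whose SNR threshold is met."""
    rate = MCS_TABLE[0][1]
    for threshold, mbps in MCS_TABLE:
        if snr_db >= threshold:
            rate = mbps
    return rate

# Walking through the shadow of a pole: the SNR dips for a packet or two.
for snr in [31, 30, 9, 8, 28, 31]:
    print(snr, "dB ->", recommend_rate(snr), "Mbps")
# The recommendation drops only for the shadowed packets and snaps straight
# back, with no slow error-driven probing upward afterwards.
```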

LDPC (Low Density Parity Check) coding
LDPC is a super duper Forward Error Correction mechanism. Although it is almost 50 years old, it is the most effective error correcting code developed to date; it nears the theoretical limit of efficiency. It was little used until recently because of its high compute requirement. An interesting by-product of its antiquity is that it is relatively free of patent issues.

Transmit beam-forming
The term beam-forming conjures up images of a laser-like beam of radio waves pointing exactly at the client device, but it doesn’t really work like that. If you look at a fine-resolution map of signal intensity in a room covered by a Wi-Fi access point, it looks like the surface of a pond disturbed by a gust of wind – it is a patchwork of bumps and dips in signal intensity, some as small as a few cubic inches in volume. Transmit beam-forming adjusts the phase and transmit power at the various antennas to move one of the maxima of signal intensity to where the client device is.
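A toy numeric illustration of what that adjustment does (my own sketch, heavily simplified: unit-gain paths, no noise): if the transmitter knows the phase each antenna’s signal picks up on the way to the client, it can pre-rotate each transmission so all the copies arrive in phase and add up constructively there.

```python
import cmath

# Hypothetical phase shifts from four transmit antennas to the client.
channel = [cmath.exp(1j * phi) for phi in (0.3, 1.9, -2.4, 0.8)]

# Without beam-forming: split the power equally (amplitude 1/2 per antenna,
# so total transmit power is 1) and let the phases add as they may.
unsteered = sum(h * 0.5 for h in channel)
print(abs(unsteered) ** 2)        # received power, about 0.5 here; it depends on luck

# With beam-forming: weight each antenna with the conjugate of its path phase.
# Same total transmit power, but now the four copies arrive in phase.
weights = [0.5 * h.conjugate() / abs(h) for h in channel]
steered = sum(h * w for h, w in zip(channel, weights))
print(abs(steered) ** 2)          # 4.0, the coherent maximum
```

The bumps and dips in the room come from exactly this kind of phase addition; beam-forming just arranges for one of the bumps to land on the client.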

STBC
In a phone the chances are that there will only be one Wi-Fi antenna, so there will be only one spatial channel. Even so, the MIMO technique of STBC (Space-Time Block Coding) enables the handset to take advantage of the multiple antennas on the Access Point to improve range, both rate-at-range and limiting range.

Incidentally, to receive 802.11n certification by the Wi-Fi Alliance, all devices must have two or more antennas except handsets which can optionally have a single antenna. Several considerations went into allowing this concession to handsets, mainly size and power constraints. STBC is particularly useful to handsets. It yields the robustness of MIMO without a second radio, which saves all the power the second radio would burn. This power saving is compounded with another: because of the greater rate-at-range the radio is on for less time while transmitting a given quantity of data. STBC is optional in 802.11n, though it should always be implemented for systems that support 802.11n handsets.
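For the curious, the classic 2x1 Alamouti code is the simplest instance of the STBC idea, and a short sketch shows why one handset antenna is enough to collect the diversity of two access point antennas. (This is my own illustration of the textbook Alamouti scheme; 802.11n’s STBC modes build on it but differ in detail.)

```python
import cmath

def alamouti_tx(s1, s2):
    """Two symbols sent over two antennas in two symbol times."""
    return [(s1, s2),                                  # time 1: ant A sends s1, ant B sends s2
            (-s2.conjugate(), s1.conjugate())]         # time 2: ant A sends -s2*, ant B sends s1*

def alamouti_rx(r1, r2, h1, h2):
    """Combine the two received samples using channel estimates h1, h2."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    norm = abs(h1) ** 2 + abs(h2) ** 2                 # combined gain of both paths
    return s1_hat / norm, s2_hat / norm

# Hypothetical flat channel gains from the two AP antennas to the handset.
h1, h2 = 0.8 * cmath.exp(1j * 0.4), 0.5 * cmath.exp(-1j * 1.1)
s1, s2 = (1 + 1j), (1 - 1j)                            # two QPSK-like symbols

(t1a, t1b), (t2a, t2b) = alamouti_tx(s1, s2)
r1 = h1 * t1a + h2 * t1b                               # received at time 1 (noise omitted)
r2 = h1 * t2a + h2 * t2b                               # received at time 2

print(alamouti_rx(r1, r2, h1, h2))                     # recovers (s1, s2)
```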

Hardware assistance
Many of these features impose a considerable compute load. LDPC and STBC fall into this category. This is an issue for handsets, since computation costs battery life. Fortunately these features are amenable to hardware implementation. With dedicated hardware the computation happens rapidly and with little cost in power.