GIPS webinar on Wideband Voice

GIPS yesterday sponsored a webinar on HD voice in the teleconferencing and videoconferencing industries. It was introduced by Elliot Gold of Telespan Publishing. Elliot made the point that HD video will account for 90% of video equipment sales by the end of this year, and that HD audio is at least equally important to the user experience and should be the next technology transformation in the equipment and services markets.

Dovid Coplon of GIPS gave a more technical presentation, starting with a very accessible explanation of the benefits of HD audio.

Dovid made some interesting observations. He named some users of GIPS wideband technology, including Google Talk and Yahoo Messenger. He cited a May 2009 poll that showed 10% of respondents used HD audio all the time or whenever they could. To me this is a surprisingly high number, possibly reflecting a biased poll sample, or perhaps a large number of respondents who didn't understand the question.

Dovid compared the benefits of the GIPS wideband codecs, iSAC and iPCM-WB. iSAC is a lower-bit-rate, higher-complexity codec than iPCM-WB. Dovid pointed out that with IP packet overhead being so high, decreasing the bit rate of a codec is not as useful as one might think. The implication was that the lower complexity of iPCM-WB outweighed its bandwidth disadvantage in mobile applications. He also included a codec quality comparison chart based on a MUSHRA scale. The chart predictably showed iSAC way better than any of the others, and anomalously showed Speex at 24 kbps as inferior to Speex at 16 kbps.
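Dovid's overhead point is easy to sanity-check with arithmetic. The sketch below uses illustrative figures of my own (20 ms packets, uncompressed IPv4 + UDP + RTP headers totaling 40 bytes), not numbers from the webinar: halving the codec bit rate cuts the on-the-wire rate by far less than half.

```python
# Illustrative figures: 20 ms packets, IPv4 (20 B) + UDP (8 B) + RTP (12 B).
HEADER_BYTES = 20 + 8 + 12   # per-packet header overhead, no compression
FRAME_MS = 20                # one codec frame per packet

def wire_rate_kbps(codec_kbps, frame_ms=FRAME_MS, header_bytes=HEADER_BYTES):
    """Bit rate on the wire once per-packet headers are added."""
    packets_per_sec = 1000 / frame_ms
    payload_bits = codec_kbps * 1000               # codec bits per second
    header_bits = packets_per_sec * header_bytes * 8
    return (payload_bits + header_bits) / 1000

for codec in (12, 24, 32):
    print(codec, "kbps codec ->", wire_rate_kbps(codec), "kbps on the wire")
```

At these figures, dropping a codec from 24 to 12 kbps only reduces the wire rate from 40 to 28 kbps, about 30%.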

Dovid also echoed all the codec engineers I have talked with in emphasizing that the codec is a small piece of the audio quality puzzle, and that network and physical acoustic impairments can have a greater effect on user experience.

You can download the slides here.

Interview with Jean-Marc Valin of Speex

I have written before about the appeal of wideband codecs, and the damping effect that royalty issues have on them. Speex is an interesting alternative wideband codec: open source and royalty free. Having discussed the new Skype codec with Jonathan Christensen earlier this year I thought it would be interesting to hear from the creator of Speex, Jean-Marc Valin.
MS: What got you into codecs?
JMV: I did a master's at the Sherbrooke lab – the same lab that did G.729 and all that. I did speech enhancement rather than codecs, but I learned a bit about codecs there, and after my master's I thought it would be nice to have a codec that everybody could use, especially on Linux. All those proprietary codecs were completely unavailable on Linux because of patent issues, so I thought it would be something nice to do. I met a guy named David Rowe who thought the same thing and knew more about codecs than I did, so we started Speex together. In the end he didn't have much time to write code, but I did, and he had great advice and feedback.
MS: How much of Speex did you write, and how much was contributed by others?
JMV: I wrote about 90%, but most of the contributions were not in code but in terms of bug reports, feedback and suggestions. In the very beginning David Rowe didn't write much code, but he gave me really good advice. So a lot was contributed, but not a lot of the contributions were code. The port to Windows was contributed.
MS: Were there any radical innovations in algorithms that a contributor came up with?
JMV: No, I don't think there was any of that. And even what I wrote was mostly just a matter of taking building blocks that were generally known and putting them together so that a decent codec resulted. There's nothing in Speex that somebody would look at and say "Wow, this is completely unheard of." There are a few features that aren't in other codecs, but they're not fundamental breakthroughs or anything like that.
MS: How is Speex IPR-free? Do you just study the patents and figure out work-arounds or do you just assume that if you write code from scratch it’s not infringing, or do you look at patents for speech technologies that have already expired…
JMV: It's actually a mixture of all that. Basically the first thing with Speex is that I wasn't trying to innovate, especially in the technological sense. A lot of Speex is built on really old technology that either wasn't patented or, if it was, the patents had expired. A lot of '80s technology.
CELP is '80s technology. CELP was not patented. There are developments of it like ACELP, which was patented – actually by my former university – so although it's actually a pretty nice technique I couldn't use it. I just avoided it and used something else, which turned out to be not that much worse, and in the end it didn't really matter – it was just a bit of an inconvenience.
MS: Are the users like Adobe calling you to verify that Speex is IPR free?
JMV: I had a few short contacts with them. I didn’t speak with any lawyers, so I assume somebody had a look at it and decided that it was safe enough to use. It’s a fundamental problem with patents, in any case, regardless of whether you’re open source royalty free or proprietary, patented or anything like that. Anyone can claim they have a patent on whatever you do. At some point it’s a calculated risk, and Speex is not more risky than any other technology. Even if you license the patents you never know who else might claim they have a patent on the thing.
MS: Has anybody tried that with Speex?
JMV: No.
MS: How long has Speex been in use?
JMV: I started Speex in Feb. 02 and v1.1 was released in March 03 at which point the bit stream was frozen. All codecs have to freeze the bit stream at some point. All the G.72x ITU codecs have a development phase, then they agree on the codec and they say “this is the bit stream and it’s frozen in stone,” because you don’t want people changing the definition of the codec because nobody would be able to talk to each other.
MS: But you can change the implementations of the algorithms that generate the bit stream?
JMV: Most of the ITU codecs have a so-called "bit-exact" definition, which means that a given audio input has to produce exactly a specified bit pattern as the compressed version. This leaves a lot less room for optimization.
MS: Does Speex have a bit exact definition?
JMV: No. The decoder is defined, so the bit stream itself is defined, but there is no bit-exact definition, and there can’t really be because there is a floating point version and you can’t do bit exact with floating point anyway.
In that sense it’s more similar to the MPEG codecs that are also not bit-exact.
After the bit stream was frozen I spent quite a lot of time doing a fixed point port of Speex so it could run on ARM and other processors that don’t have floating point support. I also spent some time doing quality optimizations that didn’t involve changing the bit stream. There are still a lot of things you can do in terms of improving the encoder to produce a better combination of bits.
MS: So the decoder doesn’t change, but the encoder can be improved and that will give you a better end result?
JMV: Exactly. That’s what happened for example with MP3, where the first encoders were really, really bad. And over time they improved, and the current encoders are much better than the older ones.
MS: Have you optimized Speex for particular instruction sets?
JMV: There are a few optimizations that have been done in assembly code for ARM. Mostly for the ARM4 architecture there’s a tiny bit of work that I did several years ago to use the DSP instructions where available.
MS: How much attention have you paid to tweaking Speex for a particular platform, like for example a particular notebook computer?
JMV: Oh, no, no. First, all of that is completely independent of the actual codec. In the case of Speex, the same package contains the Speex codec and a lot of helper functions: echo cancellation, noise suppression and things like that. Those are completely independent of the codec. You could apply them to another codec or you could use Speex with another echo canceller. It's completely interchangeable, and there are not really any laptop-specific things. The only distinction between echo cancellers is between acoustic echo cancellers and line echo cancellers, which are usually completely different. The acoustic echo canceller will be used mostly in VoIP when you have speakers and microphones instead of headsets. What really changes in terms of acoustic echo is not really from one laptop to another but from one room to another, because you are canceling the echo from the whole room acoustics.
MS: Isn’t there a direct coupling between the speaker and the microphone?
JMV: Overall what you need to cancel is not just the path from the mic to the speakers. Even with the same laptop the model will change depending on all kinds of conditions. There's the direct path, which you need to cancel, but there are also all the paths that go through all the walls in your room. Line echo cancellers only have to cover a few tens of milliseconds, whereas acoustic echo cancellers need to cancel over more than 100 milliseconds and cancel all kinds of reflections and things like that.
Even if you are in front of your laptop and you just move slightly the path changes and you have to adjust for that.
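For readers curious what "canceling the echo" means mechanically, here is a toy sketch of the standard adaptive-filter approach (normalized LMS), with a made-up three-tap echo path rather than a realistic room response. Real cancellers, such as the one in the Speex package, are far more sophisticated, but the core idea is the same: model the echo path, subtract the estimate.

```python
import random

def nlms_echo_cancel(far_end, mic, taps=8, mu=0.5, eps=1e-8):
    """Estimate the echo path with a normalized LMS adaptive filter and
    return the residual (mic signal minus the estimated echo)."""
    w = [0.0] * taps                     # adaptive filter weights
    residual = []
    for n in range(len(mic)):
        # the most recent `taps` far-end samples, zero-padded at the start
        x = [far_end[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # estimated echo
        e = mic[n] - y                             # what survives cancellation
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        residual.append(e)
    return residual

# Synthetic test: white far-end signal through a toy 3-tap "room".
random.seed(0)
far = [random.uniform(-1, 1) for _ in range(2000)]
room = [0.5, 0.3, -0.2]
mic = [sum(h * (far[n - k] if n - k >= 0 else 0.0)
           for k, h in enumerate(room)) for n in range(len(far))]

res = nlms_echo_cancel(far, mic)
early = sum(e * e for e in res[:200])   # residual energy while adapting
late = sum(e * e for e in res[-200:])   # residual energy once converged
print(late < early / 100)
```

The residual echo shrinks by orders of magnitude once the filter converges; in a real room the filter needs hundreds of taps and constant re-adaptation, exactly as Jean-Marc describes.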
MS: So who did the echo cancellers in the Speex package – was that you?
JMV: Yes.
MS: G.711 has an annex that includes PLC (Packet Loss Concealment), and others say PLC is a part of their codec.
JMV: The PLC is tied to the codec in the case of Speex and pretty much all relatively advanced codecs. G.711 is pretty old and all packets are completely independent, so you can do pretty much anything you like for concealment. For Speex or any other CELP codec you need to tie the PLC to the codec.
MS: As far as wideband is concerned, how wideband is Speex? What are the available sample rates?
JMV: Wideband was part of the original idea of Speex. I didn't even think about writing a narrowband version of it. In the end some people convinced me that narrowband was still useful, so I did it. But it was always meant to be wideband. The way it turned out to be done was in an embedded way, which means that a wideband Speex stream is made up of a narrowband stream plus extra information for the higher frequencies. That makes it pretty easy to interoperate with narrowband systems. For instance, if you have a wideband stream and you want to convert it to the PSTN, you just remove the bits that correspond to the higher frequencies and you have something narrowband. This is for 16 kHz. For higher sample rates, Speex also supports a 32 kHz mode – I wouldn't say that it's that great, and that's one of the reasons I wrote another codec, which is called CELT (pronounced selt).
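The embedded structure Jean-Marc describes can be sketched abstractly. The frame layout below is invented purely for illustration and is not the actual Speex bit layout; the point is just that in a layered stream, dropping the extension layer leaves a decodable narrowband core.

```python
# Hypothetical layered frame: a 1-byte core length, the narrowband core,
# then an extension layer for the upper frequencies. NOT the real Speex
# bit layout -- just an illustration of embedded coding.

def make_wideband_frame(core_bits: bytes, extension_bits: bytes) -> bytes:
    """Frame = core length, core layer, extension layer."""
    return bytes([len(core_bits)]) + core_bits + extension_bits

def to_narrowband(frame: bytes) -> bytes:
    """Interoperate with a narrowband system by dropping the extension."""
    core_len = frame[0]
    return frame[1:1 + core_len]

wb = make_wideband_frame(b"NBCORE", b"HIFREQ")
print(to_narrowband(wb))   # the narrowband core alone
```

A gateway to the PSTN could do this truncation per frame without ever decoding the audio, which is what makes the embedded design cheap to interoperate.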
MS: What about the minimum packet size you have for Speex?
JMV: Packetization for Speex is in multiples of 20 ms. The total delay is slightly more than that – the encoder adds around 10 ms of look-ahead – so the total delay introduced by the codec is around 30 ms, which is similar to the other CELP-based codecs.
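The delay arithmetic is worth making explicit. A small sketch using the figures above (one 20 ms frame plus roughly 10 ms of look-ahead; the look-ahead figure is approximate):

```python
FRAME_MS = 20        # packetization: multiples of one 20 ms frame
LOOKAHEAD_MS = 10    # samples the encoder buffers ahead (approximate)

def codec_delay_ms(frames_per_packet=1):
    """Algorithmic delay the codec adds before the network sees a packet."""
    return frames_per_packet * FRAME_MS + LOOKAHEAD_MS

print(codec_delay_ms())    # one frame per packet: about 30 ms
print(codec_delay_ms(3))   # packing 3 frames per packet: about 70 ms
```

Packing multiple frames per packet saves header overhead but grows the delay linearly, which is the usual VoIP trade-off.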
MS: is Speex a variable bit rate codec?
JMV: It has both modes. In most VoIP applications people want to use constant bit rate because they know what their link can operate at. In some cases you can use VBR, that’s an option that Speex supports.
VBR will reduce the average bandwidth so if you have hundreds of conversations going through the same link, then at the same quality VBR will take of the order of 20% less bandwidth, or something like that. I don’t remember the exact figures.
A conversation can go above the average bit rate just as easily as it can go below.
MS: Can you put a ceiling on it, suppose you specify a variable bit rate not to exceed 40 kbps?
JMV: Yes, that’s a supported option. It would sound slightly worse than a constant bit rate of 40 kbps. There’s always the trade-off of bit rate and quality. I believe some people already do it in the Asterisk PBX, but I could be wrong on that one.
MS: How does Speex compare to other codecs on MIPS requirement?
JMV: I haven’t done precise comparisons, but I can say that in terms of computational complexity Speex narrowband is comparable to G.729 (not G.729A, which is less complex) and AMR-NB and Speex wideband is comparable to AMR-WB. The actual performance on a particular architecture may vary depending on how much optimization has been done. In most applications I’ve seen, the complexity of Speex is not a problem.
MS: So what about AMR-WB? Seems like it’s laden with IP encumbrances? What are the innovations in that that make it really good, and do you think it’s better than Speex or does Speex have alternative means of getting the same performance?
JMV: I never did a complete listening test comparing Speex to AMR-WB. To me Speex sounds more natural, but I'm the author, so possibly someone would disagree with me on that. In any case there wouldn't be a huge difference of one being much better than the other. The techniques are pretty different; AMR-WB uses ACELP. Both are fundamentally CELP but they do it very differently.
MS: The textbooks say CELP models the human vocal tract. What does that mean?
JMV: It’s not really modeling; it’s making many assumptions that are sort of true if the signal is actually voice. Basically the LP part of CELP is Linear Prediction, and that is a model that would perfectly fit the vocal tract if we didn’t have a nose. The rest has to do with modeling the pitch, which is very efficient, assuming the signal has a pitch, which is not true of music, for instance. The Code Excited part is mostly about vector quantization, which is an efficient way of coding signals in general.
The whole thing all put together makes it pretty efficient for voice.
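A minimal illustration of the "LP" part: the classic Levinson-Durbin recursion fits a short predictor to a signal, and for a decaying resonance (roughly what a simple two-pole vocal-tract model produces) a two-tap predictor removes most of the energy. This is a textbook sketch, not Speex code.

```python
import math

def autocorr(x, lag):
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def lpc(x, order):
    """Levinson-Durbin recursion: find coefficients a[1..order] so that
    x[n] is approximated by sum(a[k] * x[n-k])."""
    r = [autocorr(x, k) for k in range(order + 1)]
    a = [0.0] * (order + 1)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err                      # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= 1 - k * k
    return a[1:]

# A synthetic "voiced" signal: a decaying resonance.
sig = [math.cos(0.3 * n) * (0.99 ** n) for n in range(400)]
coeffs = lpc(sig, order=2)

# Energy left after prediction, compared to the signal's own energy.
resid = [sig[n] - sum(c * sig[n - k - 1] for k, c in enumerate(coeffs))
         for n in range(2, len(sig))]
resid_energy = sum(e * e for e in resid)
sig_energy = sum(s * s for s in sig)
print(resid_energy < 0.1 * sig_energy)
```

In a real CELP codec the predictor order is higher (around 10 for narrowband speech), and it is the small residual, not the waveform, that gets quantized with the codebook.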
MS: What is the biggest design win that you know of for Speex?
JMV: There are a couple of high profile companies that use Speex. Recently the one that people talked about was Flash version 10. Google Talk is using it as well.
MS: Do you track at all how many people are using it in terms of which applications are using it?
JMV: In some cases I hear about this company using Speex, or that company tells me they are using it or they ask me a few questions so they can use it. So I have a vague idea of a few companies using it, but I don’t really track them or even have a way to track them because a part of the idea of being open source is that anyone can use it with very few restrictions, and with no restrictions from me on having to get a license or anything like that.
MS: How many endpoints are running Speex now?
JMV: It’s pretty much impossible to say. There are a large number of video games that use Speex. It’s very popular in that market because it’s free.
MS: Would gamers want to use CELT instead? That’s a very delay-sensitive environment.
JMV: I think it depends on the bandwidth. I was involved in one of the first games that used it, Unreal Tournament, in 2004, and they were using a very low bit rate, so CELT wouldn't have worked. Now the bandwidths are larger, so possibly someone will want to use CELT at some point.
MS: What is CELT?
JMV: It's an acronym for Constrained Energy Lapped Transform. It actually works pretty much equally well on voice or music. The bit rate is a bit higher than Speex. Speex in wideband mode will usually take about 30 kbps at 16 kHz, whereas with CELT you usually want to use at least 40 kbps. At 40 kbps you have pretty decent voice quality at full audio bandwidth – 44.1 kHz, the CD sample rate, or 48 kHz, which is another widely used rate. For those sample rates, which basically give you the entire audio spectrum, you usually need at least 40 kbps for voice. You can go a bit lower but not much. At 48 kbps you get decent music quality, and at 64 kbps you get pretty good music quality.
MS: Is CELT a replacement for Speex?
JMV: No, there is definitely a place for both of them. There's actually very little competition between them. Usually people either want the lower rate of Speex – for instance, if you want something that works at 20 kbps, you use Speex – while CELT is for higher bit rates and lower delays, and also supports music, so there's nearly no overlap between the two.
MS: How does CELT compare to MP3?
JMV: I actually did some tests with an older version, and in terms of quality alone it was already better than MP3 which was originally quite surprising to me, because my original goal was not to beat MP3 but to make the delay much lower, because you can’t use MP3 for any kind of real-time communication, because the delay will be way more than 100 ms. CELT is designed for delays that are lower than 10 ms.
MS: Wow! So how many milliseconds are in a packet?
JMV: It is configurable. You can use CELT with packets as small as around 2 ms, or you can use packets of up to 10 ms. The default I recommend is around 5 ms.
MS: So the IP overhead must be astronomical! 2 ms at 64 kbps is 16 bytes per packet!
JMV: In normal operation you wouldn’t use the 2 ms mode, but I wanted to enable real-time music collaboration over the network. So you can have two people on the same continent that play music together in real time over the net. This is something you can’t do with any other codec that exists today.
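The overhead arithmetic is easy to verify, assuming uncompressed IPv4 + UDP + RTP headers of 40 bytes per packet (my assumption; header compression or IPv6 would change the numbers):

```python
HEADER_BYTES = 20 + 8 + 12   # IPv4 + UDP + RTP, uncompressed

def payload_bytes(kbps, frame_ms):
    """Codec payload carried in one packet of frame_ms milliseconds."""
    return kbps * 1000 * frame_ms / 1000 / 8

def overhead_fraction(kbps, frame_ms):
    """Share of each packet consumed by headers."""
    p = payload_bytes(kbps, frame_ms)
    return HEADER_BYTES / (HEADER_BYTES + p)

print(payload_bytes(64, 2))                 # 16 bytes of audio per packet
print(round(overhead_fraction(64, 2), 2))   # headers dominate the packet
```

At 2 ms packets the headers are roughly 71% of every packet, which is why this mode only makes sense for special cases like real-time music collaboration.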
MS: So the Internet delay is going to be at least 70 ms.
JMV: It depends. Overall you need to have less than 25 ms one-way delay for the communication to work. That's the total delay. So if you look at existing codecs, even so-called low-delay AAC already has at least 20 ms of packetization delay. So if you add the network and anything else you can't make it. Codecs such as AMR-WB will have 25 or 30 ms for packetization, so you have already busted the budget right there. It won't work for music. So this is one reason why I wrote CELT.
MS: Have you played music over the Internet with CELT yet?
JMV: I haven’t tried it yet but some other people have tried it and reported pretty good results.
The other goal, even if you are not trying to play music over the network, has to do with AEC (acoustic echo cancellation), which, although some software does it relatively well, is still not a completely solved problem. If you are able to get the delay low enough you almost don't have to care about echo cancellation at all, because the feedback is so quick that you don't even notice the echo. Just like when you speak on the phone you hear your own voice in the handset and it doesn't bother you because it's instantaneous.
MS: Has anybody done any research on how long the delay can get before it begins to become disorienting?
JMV: There is some research, usually for a particular application, and it will depend on the amount of echo and all of that, but usually if you manage to get it below around 50 ms for the round trip it won't be a problem, and the lower the delay the less annoying the echo is, even if you don't cancel it.
MS: What phone are you talking on now?
JMV: My company phone. At home I’m set up with my cable provider.
MS: So you don’t use Speex yourself?
JMV: I used it when I was in Australia and I wanted to talk with my family back here. But I’m not using it in any regular fashion now.
MS: So what software did you use with the webcam?
JMV: At the time I was using Ekiga and OpenWengo. Both are Linux clients, because I don't run Windows on my machines. OpenWengo is one of the few on Linux that can talk to a Windows machine.
MS: Have you ever used Skype?
JMV: Once or twice, but not regularly.
MS: What kind of cell phone do you have?
JMV: A really basic cell phone. I think I have sent maybe one or two SMS messages in my life, and that's the most complicated thing I have ever done with that phone, which I use mainly in case of emergencies. I am not a telecommunication power user or anything like that.
MS: I am thinking that cell phones are how wideband codecs are going to take off. I'm not talking about AMR-WB. There are going to be hundreds of millions of smart phones sold over the next few years that have Wi-Fi in them. And because you will be able to do VoIP from a platform that you can load applications onto, it seems like a Wi-Fi voice client for smart phones is going to be a way that wideband audio can really infiltrate and take off. I'm thinking that that might be a way for you to start using Speex in your daily life.
JMV: Well, I sure hope that Speex will take off a lot more in that area. Originally it wasn't planned to go that far. Originally the only market I had in mind was the Linux or open source market with PC-based soft phones. That's the only thing I cared about. It was designed mainly for IP networks as opposed to wireless, and I just wanted to see how far it would go. It turned out to be a lot further than I expected.
Porting to Windows was done pretty early in the game. That was a contribution – I have never actually compiled it for Windows. And eventually people started porting it to all sorts of devices I have never heard of like embedded DSPs and lots of different architectures.
MS: I must say, thank you very much. I feel that wideband audio is a great benefit to the telephone world, and will undoubtedly become very common over time, and one of the biggest impediments to wideband audio is the intellectual property issue, so having an open source, IPR-free implementation of a wideband codec that seems to be a good one is just a great thing for the world, and a wonderful thing you have done for the world.
JMV: I think wideband will be pretty important, especially for voice over IP because it’s basically the only way that VoIP can ever say that it’s better than the PSTN. As long as people stay with narrowband the best VoIP can be is “almost as good as PSTN.” And yes, IPR is a pretty serious issue there.

Open up Skype?

Skype is the gorilla of HD Voice. Looking at my Skype client I see that there are at this moment about 16 million people enjoying the wideband audio experience on Skype. The other main type of Voice over IP, SIP, is rarely used for HD Voice conversations, though I wrote an HD Voice Cookbook to help to popularize wideband codecs on SIP. Since Skype has the largest base of wideband codec users, those who are enthusiasts of both HD Voice and SIP are eager for SIP networks to interoperate with Skype, allowing all HD-capable endpoints to talk HD to each other. Skype does already kind of interoperate with SIP, but only through the PSTN, which reduces the wideband media stream to narrowband. Opening up Skype would solve this problem, so it’s obviously a good idea. What is not so clear, however, is what it means to “open up Skype.”

Skype reinvented Voice over IP, and did it better than SIP. SIP was originally intended to be a lightweight way to set up real-time communications sessions. It was the Internet Engineering Task Force's response to the complexities of the ITU VoIP standard, H.323. But SIP got hijacked by the telephone industry, and recast into the familiar mold of proliferating standards and proprietary implementations. SIP is no longer lightweight, implementation is a challenge, and only the basic features are easily interoperable.

Take a look at my HD Voice Cookbook to see what it takes to set up a typical SIP phone, then compare this to installing Skype on your PC. Or compare it to the simplicity of plugging in a POTS phone to your wall socket. So we have:

  • Skype, free video calls with HD voice from your PC to anywhere in the world;
  • POTS, narrowband voice-only calls that cost about $30 per month plus per-minute charges for international calls; or
  • SIP, that falls somewhere in between the two but which is way too complex for consumers to set up, and which people only really use for narrowband because everybody else only uses it for narrowband, so there’s no network effect.

Open VoIP standards got a several-year start on Skype, starting with H.323 and going on to SIP; but from its inception Skype blew them out of the water. To be sure, it had a strong hype amplifier, since P2P file sharing was controversial at the time and Skype came from the same people as Kazaa; but NetMeeting (an H.323 VoIP program) had an enormous installed base at the time, since it came as part of Windows. The problem Skype solved was ease of use.

Skype doesn’t just give you video and wideband voice. It’s all encrypted and you get all sorts of bonus features like conferencing, presence, chat, desktop sharing, NAT traversal and dial-by-name. And did I mention it’s free?

The open standards VoIP community was beaten fair and square by Skype, blowing a several year start in the process.

Let me clarify that. In terms of minutes of voice traffic on network backbones, SIP traffic outweighs Skype, so from that point of view, SIP is not so beaten by Skype. The sense in which Skype has trounced the open standards VoIP community is in providing users with something better and cheaper than the decades-old PSTN experience, which carrier VoIP merely strives to emulate at a marginally lower price.

So it seems to me like sour grapes to clamor for Skype to make technical changes to conform to open standards, especially if those changes would impair some of the benefits that Skype offers users. How would users benefit from opening up Skype? Would the competition lower the cost of a Skype call? It's hard to see how, when Skype calls are free. Would the service be more accessible, or accessible to more customers? No, because anybody with a browser can download Skype free by typing "Skype" or even "Skipe" into their browser's search field. Would the open standards community innovate faster than Skype, and provide more and better features? Not based on their respective track records. The open standards community has had plenty of time to out-innovate Skype and manifestly failed.

Anyway, what are the senses in which Skype is not open? It is certainly interoperable with the PSTN; SkypeIn and SkypeOut are among the cheapest ways to make calls on the PSTN. Actually, this may be the greatest threat to Skype's innovation. SkypeIn and SkypeOut are the only way that Skype makes money; this is a powerful incentive for Skype not to encourage users to abandon them. If this remains the only economic force acting on the company, Skype is likely to decay into an old-style regular phone service provider.

After a lot of debate with people who know about these things, there seem to be two main ways in which Skype could be said to be not open:

  1. The protocol is proprietary and not published, so third parties can’t implement endpoints that interoperate with Skype endpoints.
  2. Only Skype can issue Skype addresses, and Skype controls the directories rather than using DNS like SIP.

Let’s look at the issue of the proprietary protocol first. Let’s break it into two parts, first who defines the protocols and second, their secrecy. In the debate between the cathedral and the bazaar, the cathedral has recently been losing out to the bazaar amongst the theorizers. We see the success of Apache, MySQL, Linux and Firefox and it looks as though the cathedral is being routed in the marketplace, too. But on the other hand we have successful companies like Apple, Google, Intel and Skype, whose success demonstrates that a design monopoly can often deliver a more elegant and tight user experience. There is no Linus Torvalds of SIP. Having taken the decision to implement a protocol other than SIP, it seems fine to me that whoever invented the Skype protocol should continue to design it, especially since they have manifestly done a much better job than the designers of SIP – ‘better’ in the sense of being more appealing to users.

What about the secrecy? A while back one of the original designers of SIP, Henning Schulzrinne, with his colleague Salman Baset, reverse engineered the Skype network and published their findings here. There is more technical background on Skype here. According to Baset and Schulzrinne:

Login is perhaps the most critical function to the Skype operation. It is during this process a Skype client authenticates its user name and password with the login server, advertises its presence to other peers and its buddies, determines the type of NAT and firewall it is behind, discovers online Skype nodes with public IP addresses, and checks the availability of latest Skype version.

Opening up the protocol to let other people use it would enable them to implement their own Skype login servers. This would enable a parallel network, but in the absence of a new protocol that enabled the login servers to exchange information, it would not lead to interoperability, in the sense of users on Skype being able to view the presence information of users on the parallel network, or even retrieve their IP address to make a call. So it would have the effect of fragmenting the Skype network, rather than opening it. Alternatively the Skype login servers could implement the SIP protocol to exchange presence information. But then it would start to be a SIP network, not a Skype network. And the market numbers say that users find SIP inferior to Skype. So why do it?

Opening up the protocol to let other people write Skype clients that logged into the Skype login servers would open up the network, but at the risk of introducing interoperability issues due to faulty interpretations of the specification. Network protocols are notoriously prone to this kind of problem. But guaranteed interoperability of the clients is one of the primary benefits of Skype over SIP from the point of view of the user, who would therefore not benefit from this step.

So why not have Skype distribute binaries that expose to third party applications the functionality of the protocols and the ability to log into the Skype login server through a published API? Wait a sec – they already do that.

Another objection to Skype publishing the protocols for third parties to implement is that there would be a danger of the third parties implementing some parts of the protocol but not others. For example not the encryption part, or not the parts that enable clients to be super-nodes or relays. A proliferation of this kind of free-rider would stress the network, making it more prone to failure.

Related to the issue of who implements the login servers is who issues Skype addresses. There is a central authority for issuing phone numbers (the ITU), and a central authority for issuing IP addresses (the IANA). But in both cases, the address space is hierarchical, allowing the central authority to delegate blocks of addresses to third party issuers. The Skype address space is not hierarchical, so it would require some kind of reworking to enable delegation. Alternatively the Skype login servers could accept logins from anybody with a SIP address. But there would be no guarantee that the client logging in was interoperable.

Scanning back through this posting, I see that my arguments could be parodied as “you can’t argue with success,” and “if it ain’t broke don’t fix it.” Arguments of this type are normally weak, so in this case I think my points are actually “there are reasons for Skype’s success,” “fixes could break it,” and “users would be better served if Skype competitors concentrated on seducing them with a superior offering,” the last of which, after all, is how Skype has won its users away from the traditional telecom industry. Some people are trying this approach, notably Gizmo5, which I plan to write about later.

HD Voice Cookbook

One of the themes of this blog is how phone conversations can sound much better in VoIP because of wideband codecs. If you have a corporate IT department and a new PBX from a company like Cisco, Avaya, Nortel, Siemens or Alcatel-Lucent, the phones can normally be configured to use the (wideband) G.722 codec on internal calls. And if you use Skype on your PC, it normally runs with a wideband codec, unless you make a SkypeOut call to a regular phone number.

But what if you are working out of a home office, and you just want your desk phone to sound good, and to use a wideband codec when calling other phones with wideband capabilities? Unfortunately it's still a project that can require some technical skills and a lot of time. To make it easier for you, here's a cookbook explaining step by step how I did it for a particular implementation (Polycom IP650 phone using an account at OnSIP).

AT&T to deploy Voice over Wi-Fi on iPhones

Don’t get too excited by Apple’s announcement of a Voice over IP service on the iPhone 3.0. It strains credulity that AT&T would open up the iPhone to work on third party VoIP networks, so presumably the iPhone’s VoIP service will be locked down to AT&T.

AT&T has a large network of Wi-Fi hotspots where iPhone users can get free Wi-Fi service. The iPhone VoIP announcement indicates that AT&T may be rolling out voice over Wi-Fi service for the iPhone. It will probably be SIP, rather than UMA, the technology that T-Mobile uses for this type of service. It is likely to be based on some flavor of IMS, especially since AT&T has recently been rumored to be spinning up its IMS efforts for its U-verse service, which happens to include VoIP. AT&T is talking about a June launch.

An advantage of the SIP flavor of Voice over Wi-Fi is that unlike UMA it can theoretically negotiate any codec, allowing HD Voice conversations between subscribers when they are both on Wi-Fi; wouldn’t that be great? The reference to the “Voice over IP service” in the announcement is too cryptic to determine what’s involved. It may not even include seamless roaming of a call between the cellular and Wi-Fi networks (VCC).
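The "negotiate any codec" advantage comes from SIP's offer/answer model, which can be sketched in a few lines of Python (illustrative only, not any carrier's actual implementation): the answerer scans the offered codec list and accepts the first codec it also supports.

```python
# Toy offer/answer codec selection: two wideband-capable endpoints land on
# an HD codec, while a narrowband-only far end forces a G.711 fallback.

def negotiate(offered, supported):
    """Return the first offered codec the answerer supports, or None."""
    supported = set(supported)
    for codec in offered:
        if codec in supported:
            return codec
    return None

# Both phones on Wi-Fi and wideband-capable: the call runs in HD.
print(negotiate(["AMR-WB", "G722", "PCMU"], ["G722", "AMR-WB", "PCMU"]))  # AMR-WB
# Far end is narrowband-only: the call falls back to G.711.
print(negotiate(["AMR-WB", "G722", "PCMU"], ["PCMU", "PCMA"]))  # PCMU
```

UMA, by contrast, tunnels the GSM bearer, so the codec choice is fixed by the cellular side.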

AT&T has several Wi-Fi smartphones in addition to the iPhone. They are mostly based on Windows Mobile, so they can probably be enabled for this service with a software download. The same goes for Blackberries. Actually, RIM may be ahead of the game, since it already has FMC products in the field with T-Mobile, albeit on UMA rather than SIP, while Windows Mobile phones are generally ill-suited to VoIP.

Skype’s SILK codec available royalty free to third parties

I wrote earlier about the need for royalty-free wideband codecs, and about a conversation with Jonathan Christensen about SILK, Skype’s new super-wideband codec.

This week Jonathan announced that Skype is releasing an SDK to let third parties integrate SILK with their products, and distribute it royalty free. This is very good news. It comes on top of Skype’s announcement that Nokia is putting the Skype client on some of its high end phones. If the Nokia deal includes SILK, and the platform exposes SILK to third party applications on the phones, SILK will quickly become the most widely used wideband codec for SIP as well as the most widely used wideband codec, period. That is, if the Nokia deal stands.

Polycom has been leading the wideband codec charge on deskphones, and it already co-brands a phone with Skype. It would make sense for Polycom to add SILK to its entire line of IP phones.

For network applications like voice, Metcalfe’s Law is like gravity. Skype has over 400 million users. If the royalty-free license has no catches, the wideband codec debate is history, at least until LTE brings AMR-WB to mass-market cell phones.

Skype’s new super-wideband codec

I spoke with Jonathan Christensen of Skype yesterday, about the new codec in the latest Windows beta of Skype:

MS: Skype announced a new voice codec at CES. What’s different about it from the old one?

JC: The new codec is code-named SILK. Compared to its predecessor, SVOPC, the new codec gives the same or better audio response at half the bit-rate for wideband, and we also introduced a super wideband mode. SVOPC uses a 16 kHz sample rate, giving 8 kHz of audio bandwidth. The new codec has that mode as well, but it also has a 24 kHz sample rate, 12 kHz audio bandwidth mode. Most USB headsets have enough capture and render fidelity that you can experience the 12 kHz super wideband audio.
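The sample-rate/bandwidth pairs Jonathan quotes follow directly from the Nyquist limit: usable audio bandwidth is at most half the sample rate. A quick Python check:

```python
# Nyquist: a codec sampling at rate R can represent audio up to R/2.

def audio_bandwidth_hz(sample_rate_hz):
    return sample_rate_hz // 2

for name, rate in [("narrowband (G.711)", 8000),
                   ("wideband (SVOPC/SILK)", 16000),
                   ("super wideband (SILK)", 24000)]:
    print(f"{name}: {rate} Hz sampling -> {audio_bandwidth_hz(rate)} Hz audio")
```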

MS: Is the new codec an evolution of SVOPC?

JC: The new codec was a separate development branch from SVOPC. It has been under development for over 3 years, during which we focused both on the codec and the echo canceller and all the surrounding bits, and eventually got all that put together.

MS: What about the computational complexity?

JC: The new codec design point was different from SVOPC. SVOPC was designed for use on the desktop with a math coprocessor. It is actually pretty efficient. It’s just that it has a number of floats in it so it becomes extremely inefficient when it’s not on a PC.
The new codec’s design goal was to be ultra lightweight and embeddable. The vast majority of the addressable device market is better suited to fixed point, so it’s written in fixed point ANSI C – it’s as lightweight as a codec can be in terms of CPU utilization. Our design point was to be able to put it into mobile devices where battery life and CPU power are constrained, and it took almost 3 years to put it together. It’s a fundamental, ground up development; lots of very interesting science going into it, and a really talented developer leading the project. And now it’s ready. It’s a pretty significant jump forward.
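To see why "fixed point ANSI C" matters on devices without a floating-point unit, here is a toy Q15 fixed-point multiply in Python. This is illustrative only, not SILK's actual arithmetic: fractional values in [-1, 1) are stored as 16-bit integers scaled by 2^15, so all the math stays in integer registers.

```python
# Q15 fixed-point sketch: represent x in [-1, 1) as round(x * 2**15).

Q = 15

def to_q15(x):
    return int(round(x * (1 << Q)))

def q15_mul(a, b):
    # 16x16 -> 32-bit multiply, then shift back down to Q15 scale.
    return (a * b) >> Q

a, b = to_q15(0.5), to_q15(0.25)
product = q15_mul(a, b)
print(product / (1 << Q))  # 0.125, computed with integer operations only
```

On a DSP or a phone's ARM core this kind of integer math is cheap and predictable, which is the point of shipping the codec in fixed point.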

MS: Is the new codec based on predictive voice coding?

JC: SVOPC has two modes, an audio mode and a speech mode, and the speech mode is much more structured towards speech. The new codec strikes a little bit more of a balance between a general audio coder and a speech coder. So it does a pretty good job with stuff like background noise and music. But to get that kind of bit-rate reduction there are things about speech that you can capitalize on and get huge efficiency; we didn’t toss all that out. We are definitely using some of the model approach.

MS: Normally with an evolving technology one expects the increments to get smaller over time. With the new codec you are getting a 50% improvement in bandwidth utilization, so you can’t be at the incremental stage yet?

JC: I don’t think we are. We were listening to samples from various versions of the client going back to 2.6, now we are at 4.0. In the same situation – pushing the same files in the same acoustic settings through the different client versions – in every release there’s a noticeable (even to the naked ear) difference in quality between the releases.

We are not completely done with it. There are many different areas where we can continue to optimize and tweak it, but we believe it’s at or above the current state of the industry in terms of performance.

————–

Skype 4.0 for Windows has the new codec.
The current Mac beta doesn’t yet support the new codec.

Update: February 3rd, 2009: Here is a write-up of SILK from the Skype Journal.

Update: March 7th, 2009: Skype has announced an SDK for third parties to implement SILK in their products, royalty free.

Wideband codecs and IPR

Wideband codecs are a good thing. They have been slow to enter the mainstream, but there are several reasons why this is about to change.

Voice codecs are benefiting from the usual good effects of Moore’s law. Each year higher-complexity (higher computation load) codecs become feasible on low-cost hardware, and each year it is cheaper to fit multiple codecs into a ROM (adding multiple codecs increases the chance that two endpoints will have one in common).

Voice codecs are often burdened by claims of intellectual property rights (IPR) by multiple players. This can make it difficult for software and equipment vendors to use codecs in their products without fear of litigation. The industry response has been to create “patent pools” where the patent owners agree to let a single party negotiate a blanket license on their behalf:

Prior to establishment of the Pool, the complexity of negotiating IPRs with each intellectual property owner discouraged potential integrators.

Unfortunately there is still no pool for AMR-WB (G.722.2), the standard wideband codec ratified by the 3GPP for use in cell phones. Even where there is a pool, a license from it doesn’t guarantee that a use of the codec won’t infringe some yet-to-be-revealed patent outside the pool, and it doesn’t indemnify the licensee against such a claim.

There are several royalty-free wideband codecs available. I mentioned a couple of them (from Microsoft and from Skype) in an Internet Telephony Column.

Microsoft and Skype have got around the royalty issue to some extent by creating proprietary codecs. They have researched their algorithms and have either concluded that they don’t infringe or have bought licenses for the patents they use.

G.722 (note that G.722, G.722.1 and G.722.2 are independent of each other, both technically and from the point of view of IPR) is so old that its patent restrictions have expired, making it an attractive choice of common baseline wideband codec for all devices. Unfortunately its antiquity also means that it is relatively inefficient in its use of bandwidth.

Polycom did a major good thing for the industry when it made G.722.1 (Siren7) available on a royalty-free basis. G.722.1 is considerably better than G.722, though it is not as efficient as G.722.2.

The open-source Speex codec is efficient and royalty free, but being open source it carries a little more fear of infringement than the other codecs mentioned here. There are three reasons why this fear may be misplaced. First, its authors claim to have based it on old (1980s) technology. Second, it has been available for some years now and has been shipped by large companies, and no claims of infringement have surfaced. Third, while it is possible in these times of outrageous patent trolling that somebody will pop up with a claim against Speex, a similar risk exists for all the other codecs, including the ones with patent pools.

So we now have three royalty-free wideband codecs (G.722, G.722.1 and Speex); we have hardware capable of running them cheaply; we have broad deployment of VoIP and growing implementation of VoIP trunking. We have increasing data bandwidth to homes and businesses, to the point where the bandwidth demands of voice are trivial compared to other uses like streaming video and music downloads. Plus there’s a wild card. By 2010 over 300 million people will have mobile smartphones capable of running software that will give them wideband phone conversations over a Wi-Fi connection.
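The claim that voice bandwidth is now trivial is easy to check with rough arithmetic. This Python sketch uses nominal codec rates and 20 ms packets (both assumptions), and includes the per-packet IPv4/UDP/RTP header overhead:

```python
# On-the-wire bit-rate for one voice stream, headers included.

IP_UDP_RTP_OVERHEAD_BYTES = 20 + 8 + 12  # IPv4 + UDP + RTP headers

def wire_rate_kbps(codec_kbps, frame_ms):
    """Wire bit-rate in kbps, adding per-packet header overhead."""
    packets_per_sec = 1000 / frame_ms
    payload_bytes = codec_kbps * 1000 / 8 / packets_per_sec
    total_bytes = payload_bytes + IP_UDP_RTP_OVERHEAD_BYTES
    return total_bytes * 8 * packets_per_sec / 1000

for name, rate, frame in [("G.711 narrowband", 64, 20),
                          ("G.722 wideband", 64, 20),
                          ("Speex wideband", 28, 20)]:
    print(f"{name}: {wire_rate_kbps(rate, frame):.0f} kbps on the wire")
```

Even the widest of these streams is around 80 kbps, a rounding error on a multi-megabit broadband connection. The sketch also shows why shaving codec bit-rate saves less than one might think: the fixed 40-byte header overhead (16 kbps at 20 ms packets) is paid regardless of the codec.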

Perhaps the time for wideband telephony is at hand.

Wideband audio conferencing bridge

Skype lets you do audio conferencing with wideband codecs, and a service called Vapps High Definition Conferencing does the same thing for non-Skype VoIP calls.

Now other VoIP providers can offer wideband conferencing too. A company called Wyde Voice sells an all-IP conferencing platform that natively uses wideband codecs. The Wyde platform uses the iSAC codec from GIPS, so anybody calling in from a soft phone like the Gizmo5 client, or the Google, AOL or Yahoo VoIP clients, can enjoy a conference in wideband. If one of the participants in the call is using a narrow-band codec, the Wyde device up-samples the signal to wideband quality for mixing.
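The up-sample-then-mix idea can be sketched in a few lines of Python. This is illustrative only (Wyde's actual DSP is not public, and a real bridge would use a proper interpolation filter rather than linear interpolation): the narrowband 8 kHz stream is doubled to 16 kHz so it can be summed with the wideband participants.

```python
# Toy conference mixing: up-sample a narrowband stream to the wideband
# rate, then sum the streams sample by sample.

def upsample_2x(samples):
    """Double the sample rate by linear interpolation."""
    out = []
    for i, s in enumerate(samples):
        out.append(s)
        nxt = samples[i + 1] if i + 1 < len(samples) else s
        out.append((s + nxt) / 2)
    return out

def mix(*streams):
    """Sum equal-length streams sample by sample."""
    return [sum(frame) for frame in zip(*streams)]

narrowband = [0.0, 1.0, 0.0, -1.0]   # participant on an 8 kHz codec
wideband = [0.1] * 8                 # participant on a 16 kHz codec
print(mix(upsample_2x(narrowband), wideband))
```

Up-sampling cannot restore the frequencies the narrowband codec discarded, but it lets everyone else in the conference keep their wideband quality.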

I have always been an enthusiastic proponent of wideband audio – it is one of the major potential advantages of VoIP over circuit switched telephony. Circuit switched calls are encoded with G.711, which yields 12 bits of effective dynamic range and a maximum frequency of about 3.5 kHz. Human speech has harmonics even above 10 kHz, which is why it is hard to tell the difference between an “F” and an “S” over the phone. The G.711 codec places an absolute limit on the sound quality of a regular phone call. A VoIP phone call can use a wideband codec, with whatever dynamic range and frequency range you want. There are several of them, commonly with a sample size of 16 bits and a sampling rate of 16 kHz, which captures a maximum audio frequency of 8 kHz. When you have a good enough connection Skype uses a wideband codec by default, which is why it can sound better than “toll quality” (if you aren’t limited by your loudspeaker and microphone.)
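G.711 squeezes that effective dynamic range out of 8-bit samples by companding: quiet samples get fine resolution, loud ones are quantized coarsely. A continuous-domain sketch of the mu-law curve in Python (real G.711 uses a segmented lookup-table approximation of this curve):

```python
import math

# Mu-law companding: compress the signal's amplitude range before
# quantizing, expand it again on decode.

MU = 255  # mu-law constant used by G.711 in North America and Japan

def mulaw_compress(x):
    """Map x in [-1, 1] through the mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y):
    """Inverse of mulaw_compress."""
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)

# A quiet 1% signal is boosted to about 23% of the companded scale,
# so quantizing the companded value preserves much more of its detail.
for x in (0.01, 0.1, 1.0):
    print(f"{x:>5}: companded to {mulaw_compress(x):.3f}")
```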

Unfortunately, for the non-Skype world there’s a chicken and egg problem – almost no phones support wideband codecs, so the carriers aren’t motivated to support them either. Worse, any VoIP call that traverses the PSTN at any point is converted to G.711, losing the wideband frequencies. Worse yet, to cut costs most carrier implementations of VoIP use a bandwidth-saving codec that intrinsically delivers inferior sound quality to G.711; for example, last I heard Vonage was using G.729A.

As VoIP matures, and more and more calls are IP end-to-end through VoIP peering and ENUM arrangements (what Gizmo5 calls “back-door dialing”) wideband codecs will become more pervasive and our conversations will become clearer. The Wyde announcement is a step towards that world.

CSR pitches better sound quality, battery life in Bluetooth headsets

CSR announced their Bluecore 6 chip today. It will ship in production volumes in January 2008. CSR claims a more robust connection – with increased transmit power and receive sensitivity. CSR also claims a breakthrough in sound quality, achieved by going from a Continuous Variable Slope Delta (CVSD) codec to Adaptive Differential Pulse Code Modulation (ADPCM). This enables packet retransmission and a halving of transmission bandwidth. The reduced bandwidth requirement results in a reduction in power consumption, and the ADPCM codec yields a MOS of 4.14 compared with a maximum of 2.41 for CVSD.
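The bandwidth halving is straightforward arithmetic: CVSD over a Bluetooth SCO link carries 64 kbps, while 4-bit ADPCM at an 8 kHz sample rate needs only 32 kbps. A toy differential coder in Python shows the idea behind ADPCM (this is a simplification, not CSR's actual codec): transmit quantized differences between successive samples instead of the samples themselves.

```python
# Toy delta coder: the encoder sends the quantized change from its own
# running prediction; the decoder reconstructs by accumulating the same
# steps, so both sides stay in sync.

def delta_encode(samples, step=0.1):
    """Quantize sample-to-sample differences to multiples of `step`."""
    codes, prediction = [], 0.0
    for s in samples:
        code = round((s - prediction) / step)
        codes.append(code)
        prediction += code * step  # decoder will track the same state
    return codes

def delta_decode(codes, step=0.1):
    out, prediction = [], 0.0
    for code in codes:
        prediction += code * step
        out.append(prediction)
    return out

signal = [0.0, 0.12, 0.31, 0.28, 0.05]
print(delta_decode(delta_encode(signal)))  # approximates the input signal
```

Real ADPCM goes further by adapting the step size to the signal, which is what keeps the quality up at half the bit-rate.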

This is a welcome change, but doesn’t really go far enough. What’s needed is a wideband codec like AMR-WB to yield better-than-toll quality sound. While this would be redundant in a regular cell phone – ADPCM is more than adequate to carry a signal that has been encoded in GSM – it would make a huge difference in dual-mode phones carrying Voice over Wi-Fi.