
3GPP Burns Midnight Oil for 5G

10 Sep

Long hours and a streamlined feature set to finish the draft: the race is on to deliver some form of 5G as soon as possible.

An Intel executive painted a picture of engineers pushing the pedal to the metal to complete an early version of the 5G New Radio (NR) standard by the end of the year. She promised that Intel will have a test system based on its x86 processors and FPGAs as soon as the spec is finished.

The 3GPP group defining the 5G NR has set a priority of finishing a spec for a non-standalone version by the end of the year. It will extend existing LTE core networks with a 5G NR front end for services such as fixed-wireless access.

After that work is finished, the radio-access group will turn its attention to drafting a standalone 5G NR spec by September 2018.

“Right now, NR non-standalone is going fine with lots of motivation, come hell or high water, to declare a standard by the end of December,” said Asha Keddy, an Intel vice president and general manager of its next-generation and standards group. “The teams don’t even break until 10 p.m. on many days, and even then, sometimes they have sessions after dinner.”

To lighten the load, a plenary meeting of the 3GPP radio-access group next week is expected to streamline the proposed feature set for non-standalone NR. While a baseline of features such as channel coding and subcarrier spacing has been set, other features, such as MIMO beam management, are behind schedule for being defined, said Keddy.

It’s hard to say what features will be in or out at this stage, given that decisions will depend on agreement among carriers. “Some of these are hit-or-miss, like when [Congress] passes a bill,” she said.

It’s not an easy job, given the wide variety of use cases still being explored for 5G and the time frames involved. “We are talking about writing a standard that will emerge in 2020, peak in 2030, and still be around in 2040 — it’s kind of a responsibility to the future,” she said.

The difficulty is even greater given carrier pressure. For example, AT&T and Verizon have announced plans to roll out fixed-wireless access services next year based on the non-standalone 5G NR, even though that standard won’t be formally ratified until late next year.

An Intel 5G test system in the field. (Images: Intel)

Companies such as Intel and Qualcomm have been supplying CPU- and FPGA-based systems for use in carrier trials. They have been updating the systems’ software to keep pace with developments in 3GPP and carrier requests.

For its part, Intel has deployed about 200 units of its 5G test systems to date. They will be used in some of the fixed-wireless access trials with AT&T and Verizon in the U.S., as well as for other use cases in 5G trials with Korea Telecom in South Korea and NTT Docomo in Japan.

Some of the systems are testing specialized use cases in vertical markets with widely varied needs, such as automotive, media, and industrial, with companies including GE and Honeywell. The pace of all of the trials is expected to pick up next year once the systems support the 5G non-standalone spec.

Intel’s first 5G test system, released in February 2016, supported sub-6-GHz and mm-wave frequencies. The company launched a second-generation platform with integrated 4×4 MIMO in August 2016.

The current system supports bands including 600–900 MHz, 3.3–4.2 GHz, 4.4–4.9 GHz, 5.1–5.9 GHz, 28 GHz, and 39 GHz. It provides data rates up to 10 Gbits/second.

Keddy would not comment on Intel’s plans for dedicated silicon for 5G either in smartphones or base stations.

In January, Intel announced that a 5G modem for smartphones made in its 14-nm process will sample in the second half of this year. The announcement came before the decision to split NR into the non-standalone and standalone specs.

Similarly, archrival Qualcomm announced late last year that its X50 5G modem will sample in 2017. It uses eight 100-MHz channels, a 2×2 MIMO antenna array, adaptive beamforming techniques, and 64 QAM to achieve a 90-dB link budget and works with a separate 28-GHz transceiver and power management chips.
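For a sense of scale, those modem parameters can be turned into a rough peak-throughput estimate. Below is a back-of-envelope sketch in Python; the overhead factor is my own assumption, not a Qualcomm figure, and since Qualcomm’s advertised X50 peak was about 5 Gbits/second, real-world overhead is clearly higher than this naive estimate assumes.

```python
# Back-of-envelope peak-throughput estimate for an X50-class modem.
# The symbol_efficiency value is an illustrative guess, not a Qualcomm spec.

channels = 8               # aggregated carriers
bandwidth_hz = 100e6       # 100 MHz per channel
bits_per_symbol = 6        # 64 QAM -> log2(64) = 6 bits per symbol
mimo_streams = 2           # 2x2 MIMO -> up to 2 spatial streams
symbol_efficiency = 0.75   # assumed loss to coding, pilots, and guard intervals

# Crude OFDM approximation: ~1 symbol per second per hertz per stream,
# discounted for protocol overhead.
peak_bps = (channels * bandwidth_hz * bits_per_symbol
            * mimo_streams * symbol_efficiency)
print(f"Estimated peak throughput: {peak_bps / 1e9:.1f} Gbits/second")  # ~7.2
```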

Source: http://www.eetimes.com/document.asp?doc_id=1332248&page_number=2

Does Cloud Solve or Increase the “Four Pillars” Problem?

15 Oct
It has long been said – often by this author – that there are four pillars to application performance:
  1. Memory
  2. CPU
  3. Network
  4. Storage

As soon as you resolve one of them to improve application response times, another becomes the bottleneck, even if you are not actually hitting that bottleneck yet.

For a bit more detail, they are:

  • “Memory consumption” – because this drives swapping in modern operating systems.
  • “CPU utilization” – because regardless of OS, there is a magic line past which performance degrades radically.
  • “Network throughput” – because applications have to communicate over the network, and whether or not the code blocks on I/O (almost all network code today does), the information requested over the network is necessary and will eventually block code from continuing to execute.
  • “Storage” – because IOPS matter when writing to or reading from disk (or when the OS swaps memory out and back in).

These four have long been relatively easy to track, and the relationship is pretty easy to spot: when you resolve one problem, one of the others becomes the “most dangerous” to application performance. But historically, you’ve always had access to the hardware. Even in highly virtualized environments, these items could be considered at both the host and guest level, because both individual VMs and the entire system matter.
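As a minimal sketch of what tracking the four pillars on a single host can look like, here’s an example using Python and the psutil library; the sampling approach is illustrative, and any thresholds you alert on would be your own.

```python
# A minimal sketch of sampling the "four pillars" on one host with psutil.
import psutil

def sample_pillars(interval=1.0):
    """Take one sample of memory, CPU, network, and storage pressure."""
    net0, disk0 = psutil.net_io_counters(), psutil.disk_io_counters()
    cpu_pct = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
    net1, disk1 = psutil.net_io_counters(), psutil.disk_io_counters()

    return {
        "memory_pct": psutil.virtual_memory().percent,
        "cpu_pct": cpu_pct,
        "net_bytes_per_sec": ((net1.bytes_sent + net1.bytes_recv)
                              - (net0.bytes_sent + net0.bytes_recv)) / interval,
        "disk_iops": ((disk1.read_count + disk1.write_count)
                      - (disk0.read_count + disk0.write_count)) / interval,
    }

if __name__ == "__main__":
    # Whichever pillar is closest to its ceiling is the one to watch next.
    print(sample_pillars())
```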

When moving to the cloud, the four pillars become much less manageable. How much less depends a lot upon your cloud provider, and how you define “cloud”.

Put in simple terms, if you are suddenly struck blind, that does not change what’s in front of you, only your ability to perceive it.

In the PaaS world, you have only the tools the provider offers to measure these things, and are urged not to think of the impact that host machines may have on your app. But they do have an impact. In an IaaS world you have somewhat more insight, but as others have pointed out, less control than in your datacenter.


In the SaaS world, assuming you include that in “cloud”, you have zero control and very little insight. If your app is not performing, you’ll have to talk to the vendor’s staff to (hopefully) get them to resolve issues.

But is the problem any worse in the cloud than in the datacenter? I would have to argue no. Your ability to touch and feel the bits is reduced, but the actual problems are not. In a pure-play public cloud deployment, the performance of an application is heavily dependent upon your vendor, but the top-tier vendors (Amazon springs to mind) can spin up copies as needed to reduce workload. This is not a far cry from one common performance trick used in highly virtualized environments – bring up another VM on another server and add it to load balancing. If the app is poorly designed, the net result is not that you’re buying servers to host instances; it is instead that you’re buying instances directly.

This has implications for IT. The reduced up-front cost of using an inefficient app – no matter which of the four pillars it is inefficient in – means that IT shops are more likely to tolerate inefficiency, even though in the long run the monthly payments may add up to far more than the cost of purchasing a new server, simply because the budget pain is reduced.
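A toy example of that budget math (the prices below are made-up placeholders, not quotes from any vendor):

```python
# Illustrative break-even arithmetic: ongoing instance spend vs. buying a server.
# Both prices are hypothetical placeholders -- substitute your own numbers.

server_capex = 6000.0         # one-time cost of a new server (hypothetical)
extra_instance_cost = 350.0   # monthly cost of the extra instances an
                              # inefficient app forces you to run (hypothetical)

breakeven_months = server_capex / extra_instance_cost
print(f"Break-even after {breakeven_months:.0f} months")  # ~17 months
# Past the break-even point, tolerating the inefficiency costs more than the
# server would have -- it just never shows up as one big invoice.
```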

There are a lot of companies out there offering information about cloud deployments that can help you see, if you’re feeling blind.

In fair disclosure, F5 is one of them – I work for F5 – and that’s all you’re going to hear on that topic in this blog.

While knowing does not always translate directly into action, and there is some information that only the cloud provider can offer you, knowing where performance bottlenecks are does at least give some level of decision-making back to IT staff. If an application is performing poorly, looking into what appears to be happening (you can see network bandwidth, VM CPU usage, VM IOPS, etc., but not what’s happening on the physical hardware) can inform decision-making about how to contain the OpEx costs of cloud.

Internal cloud is a much easier play: you still have access to all the information you had before cloud came along, and generally the investigation is similar to that used in a highly virtualized environment. From a performance-troubleshooting perspective, it’s much the same. The key with both virtualization and internal (private) clouds is that you’re aiming for maximum utilization of resources, so you will have to watch for the bottlenecks more closely – you’re “closer to the edge” of performance problems, because you designed it that way.

A comprehensive logging and monitoring environment can go a long way, in all cloud and virtualization environments, toward keeping on top of issues that crop up – particularly in a large datacenter with many apps running.

And developer education on how not to be a resource hog is helpful for internally developed apps. For externally developed apps, the best you can do is ask for sizing information and then test those assumptions before buying.

Sometimes, cloud simply is the right choice. If network bandwidth is the prime limiting factor, and your organization can accept the perceived security/compliance risks, for example, the cloud is an easy solution – bandwidth in the cloud is either not limited, or limited by your willingness to write a monthly check to cover usage. Either way, it’s not an Internet connection upgrade, which can be dastardly expensive not just at install, but month after month.

Keep rocking it. Get the visibility you need, don’t worry about what you don’t need.

Why the iPhone 5 Lacks Support for Simultaneous Voice and LTE or EVDO (SVLTE, SVDO)

14 Sep

So we’ve seen the new iPhone and had a chance to briefly play with it in the demo room, but as I’ve learned so many times in the past, you only really know a handset after you’ve taken a look at the FCC test reports or spent a few days with it yourself. On my flights home, I typed up our iPod and EarBud impressions piece, but also pored over those FCC test reports for the iPhone 5, and it became immediately obvious that the iPhone 5 doesn’t support simultaneous voice and data on CDMA2000 carriers such as Sprint and Verizon in the US.

The reasons, as always, are somewhat technical but at a high level pretty simple. Suffice it to say that this is a design decision which makes the phone as small and light as it is (it really is light, almost alarmingly so) and enables it to support a wide number of LTE bands, rather than some major oversight, as I’ve seen it portrayed.

First, a bit of overview. At the Apple event there were two physical hardware models for the iPhone 5 announced: A1428 and A1429, with three different provisioning configurations. There are hardware differences between the two models, and what provisioning boils down to is both how the phone is initially provisioned, and likely what AMSS (Advanced Mobile Subscriber Software – Qualcomm’s software package that runs on the baseband) gets loaded at boot. This is analogous to how with iPhone 4S there was a single hardware model, but two different configurations for CDMA and GSM.

For the most part, the two hardware models are identical, and to most users the two models will be indistinguishable, but there are physical differences to accommodate a number of different LTE bands between the two. The Apple iPhone 5 specs page lists this at a high level, and there’s an even more explicit LTE specific page with a list of what LTE bands work for what carriers. Of course, what ultimately really matters is what’s in the FCC docs and in the hardware, and looking at those there are a few more bands supported than listed for LTE.

Why this is the case is interesting, and a function of the transceiver and Apple’s implementation – with Qualcomm’s transceivers (specifically the RTR8600 in the iPhone 5, but this applies to others as well), each “port” is created equal and can handle WCDMA or LTE equally well. If your design includes the right power amplifiers (PAs), filters, and antenna tuning, you’re good to go, which is why we see LTE testing reports for bands that aren’t listed otherwise. I saw this same behavior in Apple’s test reports for the iPad 3 with LTE as well – there were many more tested configurations than any carrier in the USA will run – but remember that Apple’s approach is about covering as many configurations as possible with the fewest number of SKUs.

I’ve made a table of what the cellular band support breakdown is for the three iPhone 5 configurations:

Apple iPhone 5 Models

| iPhone 5 Model | GSM/EDGE Bands | WCDMA Bands | CDMA 1x/EVDO Rev. A/B Bands | LTE Bands (FCC + Apple) |
|---|---|---|---|---|
| A1428 “GSM” | 850/900/1800/1900 MHz | 850/900/1900/2100 MHz | N/A | 2/4/5/17 |
| A1429 “CDMA” | 850/900/1800/1900 MHz | 850/900/1900/2100 MHz | 800/1900/2100 MHz | 1/3/5/13/25 |
| A1429 “GSM” | 850/900/1800/1900 MHz | 850/900/1900/2100 MHz | N/A | 1/3/5 (13/25 unused) |
Note that we now have both quad-band GSM/EDGE and quad-band WCDMA across all three models. All three configurations support WCDMA with up to HSDPA Category 24 (DC-HSPA+ with 64 QAM for 42 Mbps downlink) and HSUPA Category 6 (5.76 Mbps uplink), as far as I’m aware. Only the A1429 “CDMA” configuration supports CDMA2000 1x and EVDO, and interestingly enough it even supports EVDO Rev. B, which includes carrier aggregation, though no carrier in the USA will ever run it. In addition, the FCC reports include 1x Advanced testing and certification for CDMA Band Classes 0 (800 MHz), 1 (1900 MHz), and 10 (Secondary 800 MHz), so I have no idea why Sprint is saying it won’t work with their “HD Voice” (really 1x Advanced) deployment, but I’m digressing yet again…

Looking at just the LTE band numbers is hard unless you have them internalized, so I made yet another table which focuses just on that aspect:

Apple iPhone LTE Band Coverage

| E-UTRA (LTE) Band Number | Applicable iPhone Model | Commonly Known Frequency (MHz) | Bandwidths Supported (MHz) |
|---|---|---|---|
| 1 | A1429 | 2100 | 20, 15, 10, 5 (?) |
| 2 | A1428 | 1900 | 20, 15, 10, 5, 3, 1.4 |
| 3 | A1429 | 1800 | 20, 15, 10, 5, 3, 1.4 (?) |
| 4 | A1428 | 1700/2100 | 20, 15, 10, 5, 3, 1.4 |
| 5 | A1428, A1429 | 850 | 10, 5, 3, 1.4 |
| 13 | A1429 | 700 Upper C | 10, 5 |
| 17 | A1428 | 700 Lower B/C | 10, 5 |
| 25 | A1429 | 1900 | 20, 15, 10, 5, 3, 1.4 |

So you can see how, with two different hardware models, Apple is able to support no fewer than eight LTE bands with largely the same hardware – the same display, chassis, battery, form factor, and PCB outline (different power amplifiers and filters are required) – and roughly the same exterior antennas (gain differs on the primary bottom antenna between the two models; no doubt they’re tuned differently). Previously, most handsets I’ve seen were destined to work on only a single carrier, and thus implemented at most one or two LTE bands.
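To make the SKU logic concrete, here’s the band table above expressed as data, with a small helper of my own (not anything from Apple) to match a carrier’s LTE bands to a model:

```python
# The iPhone 5 LTE band table as data, with a hypothetical lookup helper.
# Band lists are transcribed from the table above; the helper is illustrative.

IPHONE5_LTE_BANDS = {
    'A1428 "GSM"': {2, 4, 5, 17},
    'A1429 "CDMA"': {1, 3, 5, 13, 25},
    'A1429 "GSM"': {1, 3, 5},  # bands 13/25 are in the hardware but unused
}

def models_for_bands(required_bands):
    """Return the iPhone 5 configurations covering all of a carrier's bands."""
    return [model for model, bands in IPHONE5_LTE_BANDS.items()
            if set(required_bands) <= bands]

# Verizon's 700 MHz Upper C deployment is band 13:
print(models_for_bands({13}))     # ['A1429 "CDMA"']
# AT&T's 700 MHz Lower B/C plus AWS spectrum maps to bands 17 and 4:
print(models_for_bands({4, 17}))  # ['A1428 "GSM"']
```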

If you look at Apple’s iPhone lineup historically, there’s an obvious two-year cadence that jumps out at you, and it applies even to cellular. I’ve removed the CPU and GPU columns so this table doesn’t get too huge, but historically Apple has been very careful about engineering a platform that will last for a while.

Apple iPhone – Cellular Trends

| Model | Release Year | Industrial Design | Cellular Baseband | Cellular Antennas |
|---|---|---|---|---|
| iPhone | 2007 | 1st gen | Infineon S-Gold 2 | 1 |
| iPhone 3G | 2008 | 2nd gen | Infineon X-Gold 608 | 1 |
| iPhone 3GS | 2009 | 2nd gen | Infineon X-Gold 608 | 1 |
| iPhone 4 (GSM/UMTS) | 2010 | 3rd gen | Infineon X-Gold 618 | 1 |
| iPhone 4 (CDMA) | 2011 | 3rd gen | Qualcomm MDM6600 | 2 (Rx diversity, no Tx diversity) |
| iPhone 4S | 2011 | 3rd gen | Qualcomm MDM6610 (MDM6600 w/ ext. trans.) | 2 (2 Rx / 1 Tx diversity) |
| iPhone 5 | 2012 | 4th gen | Qualcomm MDM9615 w/ RTR8600 ext. trans. | 2 (2 Rx / 1 Tx diversity) |
It was touched on in the keynote, but the iPhone 5 likewise inherits the two-antenna cellular design touted with the 4S. This is the original mitigation for iPhone 4 “deathgrip”, introduced somewhat quietly in the iPhone 4 (CDMA) and carried over to the 4S with one additional improvement – the phone included a double-pole, double-throw switch which allowed it to change which antenna was used for transmit as well, to completely quash any remaining unwanted attenuation. While receive diversity was a great extra for the 4S that drastically improved cellular performance at cell edges, in LTE two-antenna receive diversity is mandatory, leaving the base LTE antenna configuration a two-antenna setup (two Rx, one shared for Tx). Thankfully, Apple already had that antenna architecture worked out with the 4S, and carried it over to the iPhone 5.

So now that we’re done with all that, where the heck does simultaneous voice and data fit into the picture? Again, it comes down to antennas, design decisions, and band support.

First, a bit of history: first-generation LTE phones on Verizon used a combination of two cellular architectures to deliver both LTE and CDMA 1x/EVDO capabilities. Quite literally, there were two basebands, two transmit chains, and at least three antennas: a two-antenna setup for LTE, and one transmit antenna for CDMA 1x/EVDO duties. Usually this boiled down to a shared diversity receive antenna for LTE and CDMA 1x/EVDO, and discrete transmit antennas for each.


Samsung Galaxy S 3 for VZW (SCH-I535) antennas (3 for cellular)

Modernizations from Qualcomm have since reduced the number of digital basebands required to just one (with MSM8960 and MDM9x15), which helped improve battery life, but the end implementation still requires the same three-antenna solution. This configuration enables both SVLTE (simultaneous voice and LTE) and SVDO (simultaneous voice and EVDO) with one modem, but it still requires two working transmit RF chains on CDMA phones. I should mention that these methods are all in place to accommodate the fact that, as of yet, almost no CDMA networks implement VoLTE (Voice over LTE, or Voice over IMS) – MetroPCS being the exception. Making this single-radio simultaneous LTE and CDMA architecture work requires that additional RF path.

On WCDMA/GSM carriers, the path forward until we get to VoLTE is what’s called circuit-switched fallback (CS-FB). This quite literally means the phone drops from 4G LTE to 3G WCDMA (where voice and data are already multiplexed) for the call, then hands back up to LTE when you’re finished. This is the way voice works at the moment on all GSM/WCDMA carriers, and on all their LTE handsets to date.
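As a rough illustration, CS-FB behaves something like the toy state machine below – a drastic simplification of the actual 3GPP procedures, just to show the drop-and-return behavior:

```python
# Toy state machine illustrating circuit-switched fallback (CS-FB).
# A drastic simplification of the real 3GPP signaling, for illustration only.

class CsfbPhone:
    def __init__(self):
        self.rat = "LTE"       # current radio access technology
        self.in_call = False

    def start_voice_call(self):
        # Without VoLTE, the phone must fall back to WCDMA for the call;
        # data continues at 3G speeds, multiplexed with voice.
        self.rat = "WCDMA"
        self.in_call = True

    def end_voice_call(self):
        self.in_call = False
        self.rat = "LTE"       # hand back up to LTE once the call ends

phone = CsfbPhone()
phone.start_voice_call()
print(phone.rat)  # WCDMA -- voice and data multiplexed on 3G
phone.end_voice_call()
print(phone.rat)  # LTE
```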


From iPhone 5’s FCC Test Reports

So, on to the iPhone – we know definitively that the iPhone 5 doesn’t support either SVDO or SVLTE. It’s as simple as looking at the FCC documents, in the sections covering allowed and tested simultaneous transmitters for SAR (Specific Absorption Rate) testing. There, it’s spelled out that only one air interface can be active at a time, and that only one antenna can be selected for transmit at a time. There’s also an explicit mention that VoLTE is not supported. I didn’t replicate the entire table of simultaneous transmission combinations which need to be tested (it is a huge table), but there are no entries with CDMA voice active at the same time as any data mode save WiFi. This has been confirmed as well by later statements from the carriers and Apple.

From Apple’s perspective, no doubt the iPhone 5 not supporting simultaneous voice and data on CDMA carriers isn’t the end of the world. After all, iPhone 4 and 4S customers have ostensibly been using their phones without that functionality just fine for some time now – this is just a logical extension. At the same time, LTE handsets on Verizon and Sprint which currently support both SVDO and SVLTE will have a differentiator, and from what I’ve been told, the inclusion of SVDO is more of a “delighter” than a core feature.

What it really boils down to is that by using a single Tx chain, Apple is able to support a ton of LTE bands (more space for PAs, and no transceiver ports tied up supporting SVLTE for CDMA networks) and to do so without making the iPhone very large. Moving to an architecture that supports SVDO and SVLTE would require an additional transmit path and antenna, and would incur a size and weight penalty. The other reality is that CS-FB is the way voice coexists with LTE in the vast majority of cases globally, and thus engineering all that just for CDMA networks is at odds with Apple’s ultimate desire to deliver as few models as possible.

In the future all of this overhead to implement voice with legacy 3G and 2G networks will largely go away and exist only as a handover option for when LTE service isn’t available. Voice over LTE is indeed coming soon.

Update: The other LTE-related iPhone 5 thing to explain is why LTE support for 800 MHz and 2.6 GHz isn’t present. This article originally started as an explanation for why SVLTE and SVDO support are absent, but I think there’s even more discussion necessary about Apple’s likely plan of attack for including both more LTE bands and a TD-SCDMA phone for China. I plan to address that in our upcoming review.

Source: http://www.anandtech.com/show/6295/why-the-iphone-5-lacks-simultaneous-voice-and-lte-or-evdo-svlte-svdo-support-  by Brian Klug on 9/14/2012 2:24:00 AM
