Tag Archives: WiMAX

WiMAX vs. LTE vs. HSPA+: who cares who wins?

2 Oct

Who cares who wins the 4G cup?

“We must stop the confusion about which technology is going to win; it achieves nothing positive and risks damage to the entire industry.”

Anyone among the curious band of people who track articles about the status of mobile broadband (and the chances are that you are one of them) will have noticed an interesting trend over the past 18 months: the temperature of the debate about the technology most likely to succeed is rising rapidly. Increasingly polarised articles are published on a daily basis, each arguing that Long Term Evolution (LTE) is the 4G technology of choice, or that WiMAX is racing ahead, or that it’s best to stick with good old 3GPP because HSPA+ is going to beat both of them. What remains surprising is that these articles invite us, their readers, to focus slavishly on the question “WiMAX vs. LTE vs. HSPA+: which one will win?”

The question that we should ask of the authors is “Who cares who wins?” The torrent of propaganda washes over the essence of mobile broadband and puts sustained growth in the mobile industry at risk. By generating fear, uncertainty and doubt, the mobile broadband “battle” diverts attention away from the critical issues that will determine the success or failure of these evolving technologies.  The traditional weapon of the partisan author is the mighty “Mbps”; each wields their peak data rates to savage their opponents.

In the HSPA+ camp, authors fire out theoretical peak data rates of 42 Mbps DL and 23 Mbps UL. The WiMAX forces respond with theoretical peak data rates of 75 Mbps DL and 30 Mbps UL. LTE joins the fray by unleashing its theoretical peak data rates of 300 Mbps DL and 75 Mbps UL. All hell breaks loose, or so it would appear. Were it not for the inclusion of the word “theoretical”, we could all go home to sleep soundly and wake refreshed, safe in the knowledge that might is right. The reality is very different.

Sprint has stated that it intends to deliver services at between 2 and 4 Mbps to its customers with Mobile WiMAX. In the real world, HSPA+ and LTE are likely to give their users single digit Mbps download speeds.  Away from the theoretical peak data rates, the reality is that the technologies will be comparable with each other, at least in the experience of the user. These data rates, from a user’s perspective, are a great improvement on what you will see while sitting at home on your WiFi or surfing the web while on a train. The problem is that the message being put out to the wider population has the same annoying ringtone as those wild claims that were made about 3G and the new world order that it would usher in. Can you remember the allure of video calls? Can you remember the last time you actually saw someone making a video call?
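To make the gap concrete, here is a rough back-of-envelope sketch of what those numbers mean for a simple download. The 20 MB file size and the 3 Mbps “real world” rate are illustrative assumptions; only the peak rates come from the figures quoted above.

```python
# Back-of-envelope: time to download a 20 MB file at the quoted
# theoretical peaks versus a realistic single-digit rate.
# The 20 MB size and the 3 Mbps real-world figure are assumptions.

FILE_MB = 20
BITS = FILE_MB * 8 * 1_000_000  # decimal megabits, as headline rates use

rates_mbps = {
    "HSPA+ theoretical peak": 42,
    "WiMAX theoretical peak": 75,
    "LTE theoretical peak": 300,
    "Realistic mobile broadband": 3,
}

for name, mbps in rates_mbps.items():
    seconds = BITS / (mbps * 1_000_000)
    print(f"{name:27s}: {seconds:5.1f} s")
```

The point is not the absolute numbers but the spread: the same download takes under a second in the marketing material and nearly a minute in the field.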

3G has transformed the way that people think about and use their mobile phones, but not in the way that they were told to expect. In the case of 3G, mismanagement of customer expectations put our industry back years. We cannot afford to repeat this mistake with mobile broadband. Disappointed customers spend less money because they don’t value their experience as highly as they had been led to expect by advertisers.  Disappointed customers share their experience with friends and family, who delay buying into the mobile broadband world.  What we all want are ecstatic customers who can’t help but show off their device. We need to produce a ‘Wow’ factor that generates momentum in the market.

Every pundit has a pet theory about the likely deployment of mobile broadband technologies. One will claim that HSPA+ might delay the deployment of LTE. Another will posit that WiMAX might be adopted, predominantly, in the laptop or netbook market. A third will insist that LTE could replace large swathes of legacy technologies. Any of these scenarios might happen; equally, none of them might.

More likely, but less stirring, is the prediction that they are all coming, they’ll be rolled out to hundreds of millions of subscribers and, within five years, will be widespread. We must stop the confusion about which technology is going to win; it achieves nothing positive and risks damage to the entire industry.

Confusion unsettles investors, who move to other markets and starve us of the R&D funds needed to deliver mobile broadband. At street level, confusion leads early adopters to hold off making commitments to the new wave of technology while they “wait it out” to ensure they don’t buy a Betamax instead of a VHS.  Where we should focus, urgently, is on the two topics that demand open discussion and debate. First, are we taking the delivery of a winning user experience seriously? Secondly, are we making plans to cope with the data tidal wave that will follow a successful launch?

The first topic concerns delivery to the end user of a seamless application experience that successfully converts the improved data rates to improvements on their device. This can mean anything from getting LAN-like speeds for faster email downloads through to slick, content-rich and location-aware applications. As we launch mobile broadband technologies, we must ensure that new applications and capabilities are robust and stable. More effort must be spent developing and testing applications so that the end user is blown away by their performance.

The second topic, the tidal wave of data, should force us to be realistic about the strain placed on core networks by an exponential increase in data traffic. We have seen 10x increases in traffic since smartphones began to boom. Mobile device makers, network equipment manufacturers and application developers must accept that there will be capacity shortages in the short term and, in response, must design, build and test applications rigorously. We need applications with realistic data throughput requirements and the ability to catch data-greedy applications before they reach the network.

At Anite, we see the demands placed on test equipment by mobile broadband technologies at first hand. Beyond testing the technical integrity of the protocol stack and its conformance to the core specifications, we produce new tools that test applications and simulate the effects of anticipated capacity bottlenecks. Responding to the increased demand for mobile applications, we’re developing test coverage that measures applications at the end-user level. Unfortunately, not everyone is thinking that far ahead. Applications that should be “Wow”, in theory, may end up producing little more than a murmur of disappointment in the real world.

So, for the sake of our long-term prospects, let’s stop this nonsense about how one technology trounces another. The people who matter, the end users, simply do not care. WiMAX, LTE and HSPA+ will all be widely deployed. As an industry, our energy needs to be focused on delivering services and applications that exceed customer expectations. Rather than fighting, we should be learning from each other’s experiences. If we do that, our customers will reward us with growing demand. And if we all get sustained growth, then don’t we all win?

Source: http://www.telecoms.com/11695/wimax-vs-lte-vs-hspa-who-cares-who-wins/

Broadband

28 Jan

Technically, broadband refers to the carrying of multiple communications channels in a single wire or cable. In the broader sense used here, broadband refers to high-speed data transmission over the Internet using a variety of technologies (see data communications and telecommunications). This can be distinguished from the relatively slow (56 Kbps or slower) dial-up phone connections used by most home, school, and small business users until the late 1990s. A quantitative change in speed results in a qualitative change in the experience of the Web, making continuous multimedia (video and sound) transmissions possible.

Broadband Technologies The earliest broadband technology to be developed consists of dedicated point-to-point telephone lines designated T1, T2, and T3, with speeds of 1.5, 6.3, and 44.7 Mbps respectively. These lines provide multiple data and voice channels, but cost thousands of dollars a month, making them practicable only for large companies or institutions.

Two other types of phone line access offer relatively high speed at relatively low cost. The earliest, ISDN (Integrated Services Digital Network), in its typical consumer form offers two 64 Kbps channels that can be combined for 128 Kbps. (Special services can combine more channels, such as a six-channel 384 Kbps configuration for videoconferencing.)
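The ISDN figures above are simply multiples of the 64 Kbps B-channel; the bonding arithmetic can be sketched as:

```python
# ISDN rates are multiples of the 64 Kbps B-channel: bonding two
# channels gives the consumer 128 Kbps service, six give 384 Kbps.

B_CHANNEL_KBPS = 64

def bonded_rate(channels: int) -> int:
    """Aggregate rate of n bonded B-channels, in Kbps."""
    return channels * B_CHANNEL_KBPS

print(bonded_rate(2))  # 128 (typical consumer configuration)
print(bonded_rate(6))  # 384 (videoconferencing configuration)
```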

The user’s PC is connected via a digital adapter rather than the usual analog-to-digital modem. The most common telephone-based broadband system today is the digital subscriber line (see DSL). Unlike ISDN, DSL uses existing phone lines.

A typical DSL speed today is 1–2 Mbps, though higher speed services up to about 5 Mbps are now being offered. The main drawback of DSL is that the transmission rate falls off with the distance from the telephone company’s central office, with a maximum distance of about 18,000 feet (roughly 5,500 m).

The primary alternative for most consumers uses existing television cables (see cable modem). Cable is generally a bit faster (1.5–3 Mbps) than DSL, with premium service of up to 8 Mbps or so available in certain areas. However, cable speed slows down as more users are added to a given circuit. With both DSL and cable, upload speeds (the rate at which data can be sent from the user to an Internet site) are generally fixed at a fraction of download speed (often about 128 Kbps). While this “throttling” of upload speed does not matter much for routine Web surfing, the growing number of applications that involve users uploading videos or other media for sharing over the Internet (user-created content) has led to some pressure for higher upload speeds.

Ultra Broadband Rather surprisingly, the country that brought the world the Internet has fallen well behind many other industrialized nations in broadband speed. In Japan, DSL speeds up to 40 Mbps are available, and at less cost than in the United States. South Korea also offers “ultra broadband” speeds of 20 Mbps or more.

American providers, on the other hand, have tended to focus on expanding their networks and competing for market share rather than investing in higher speed technologies. However, this situation is beginning to improve as American providers ramp up their investment in fiber networks (fiber optics).

For example, in 2005 Verizon introduced FiOS, a fiber-optic service that can reach speeds of up to 15 Mbps. However, installing fiber networks is expensive, and as of 2007 it was available in only about 10 percent of the U.S. market.

Cable and phone companies typically offer Internet and TV as a package—many are now including long-distance phone service (and even mobile phone service) in a “triple play” package. (For long-distance phone service carried via the Internet, see voip.)

Wireless Broadband The first wireless Internet access was provided by a wireless access point (WAP), typically connected to a wired Internet router. This is still the most common scenario in homes and public “hot spots” (see Internet cafés and “hot spots”). However, with many people spending much of their time with mobile devices (see laptop, PDA, and smartphone), the need for always-accessible wireless connectivity at broadband speeds has been growing. The largest U.S. service, Nextlink, offered wireless broadband in 37 markets in 2007 (including many large and mid-sized cities) at speeds starting at 1.5 Mbps. An alternative is offered by cell phone companies such as Verizon and Sprint, which “piggy back” on the existing infrastructure of cell phone towers. However, the speed of this “3G” service is slower, from 384 Kbps up to 2 Mbps.

Yet another alternative beginning to appear is WiMAX, a technology that is conceptually similar to WiFi but has much greater range because its “hot spots” can be many miles in diameter. WiMAX offers the possibility of covering entire urban areas with broadband service, although questions about its economic viability have slowed implementation as of 2008.

Satellite Internet services have the advantage of being available over a wide area. The disadvantage is that there is about a quarter-second delay for the signal to travel to and from a geostationary satellite at an altitude of about 35,800 km (22,300 miles). (Lower-altitude satellites can be used to reduce this delay, but then more satellites are needed to provide continuous coverage.)
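The quarter-second figure follows directly from the geometry; a quick sketch, using the standard geostationary altitude of about 35,786 km and the speed of light:

```python
# The "quarter-second" satellite delay: the signal climbs to a
# geostationary satellite (~35,786 km altitude) and back down,
# at the speed of light.

SPEED_OF_LIGHT_KM_S = 299_792
GEO_ALTITUDE_KM = 35_786

one_way_hop_s = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S  # up + down
print(f"Minimum delay per hop: {one_way_hop_s:.3f} s")  # ~0.239 s
```

This is a physical lower bound; queuing and processing in the network only add to it.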

Adoption and Applications By mid-2007, 53 percent of adult Americans had a broadband connection at home. This amounts to 72 percent of home Internet users. (About 61 percent of broadband connections used cable and about 37 percent DSL.)

With dial-up connections declining to less than 25 percent, Web services are increasingly designed with the expectation that users will have broadband connections. This, however, has the implication that users such as rural residents and the inner-city poor may be subjected to a “second class” Web experience (see also digital divide).

Meanwhile, as with connection speed, many other countries now surpass the United States in the percentage of broadband users. Broadband Internet access is virtually a necessity for many of the most innovative and compelling of today’s Internet applications. These include downloading media (podcasting, streaming, and music and video distribution, online), uploading photos or videos to sites such as Flickr and YouTube, using the Internet as a substitute for a traditional phone line (voip), and even gaming (online games). Broadband is thus helping drive the integration of many forms of media (see digital convergence) and the continuous connectivity that an increasing number of people seem to be relying on (ubiquitous computing).

Source: http://cyberleague.wordpress.com/2013/01/27/broadband/

Spectrum Interference Standards: Seeking a Win-Win Rebound from Lose-Lose

8 Jan

Based upon lessons learned from the LightSquared situation, the author identifies important considerations for GPS spectrum interference standards, recommended by the PNT EXCOM for future commercial proposals in bands adjacent to the RNSS band to avoid interference to GNSS.

On January 13, 2012, the U.S. National Positioning, Navigation, and Timing Executive Committee (PNT EXCOM) met in Washington, D.C., to discuss the latest round of testing of the radiofrequency compatibility between GPS and a terrestrial mobile broadband network proposed by LightSquared. The proposed network included base stations transmitting in the 1525 – 1559 MHz band and handsets transmitting in the 1626.5 – 1660.5 MHz band. These bands are adjacent to the 1559 – 1610 MHz radionavigation satellite service (RNSS) band used by GPS and other satellite navigation systems. Based upon the test results, the EXCOM unanimously concluded that “both LightSquared’s original and modified plans for its proposed mobile network would cause harmful interference to many GPS receivers,” and that further “there appear to be no practical solutions or mitigations” to allow the network to operate in the near-term without resulting in significant interference.

Typical cellular base-station tower.

The LightSquared outcome was a lose-lose in the sense that billions were spent by the investors in LightSquared and, as noted by the EXCOM, “substantial federal resources have been expended and diverted from other programs in testing and analyzing LightSquared’s proposals.” To avoid a similar situation in the future, the EXCOM proposed the development of “GPS Spectrum interference standards that will help inform future proposals for non-space, commercial uses in the bands adjacent to the GPS signals and ensure that any such proposals are implemented without affecting existing and evolving uses of space-based PNT services.”

This article identifies and describes several important considerations in the development of GPS spectrum interference standards towards achieving the stated EXCOM goals. These include the identification of characteristics of adjacent-band systems and an assessment of the susceptibility of all GPS receiver types to interference in adjacent bands. Also of vital importance to protecting GPS receivers is an understanding of the user base, applications, and where the receivers for each application may be located while in use. This information, along with the selection of proper propagation models, allows one to establish transmission limits on new adjacent-band systems that will protect currently fielded GPS receivers. The article further comments on the implications of the evolution of GPS and foreign satellite navigation systems upon the development of efficacious spectrum interference standards.

Adjacent Band Characteristics

The type of adjacent-band system for which there is currently the greatest level of interest is a nationwide wireless fourth-generation (4G) terrestrial network to support the rapidly growing throughput demands of personal mobile devices. Such a nationwide network would likely consist of tens of thousands of base stations distributed throughout the United States and millions of mobile devices. The prevalent standard at the present time is Long Term Evolution (LTE), which is being deployed by all of the major U.S. carriers. LTE and LTE-Advanced provide an efficient physical layer for mobile wireless services. Worldwide Interoperability for Microwave Access (WiMAX) is a competing wireless communication standard for 4G wireless that is a far-distant second in popularity.

For the purposes of the discussion within this article, an LTE network is assumed with characteristics similar to those proposed by LightSquared but perhaps with base stations and mobile devices that transmit upon different center frequencies and bandwidths. The primary characteristics include:

  • Tens of thousands of base stations nationwide, reusing frequencies in a cellular architecture, with the density of base stations peaking in urban areas.
  • Base-station antennas at heights from sub-meter to 150 meters above ground level (AGL), with a typical height of 20–30 meters AGL. Each base station site has 1–3 sector antennas mounted on a tower such that peak power is transmitted at a downtilt of 2–6 degrees below the local horizon, with a 60–70 degree horizontal 3-dB beamwidth and 8–9 degree vertical 3-dB beamwidth.
  • Peak effective isotropic radiated power (EIRP) in the vicinity of 20–40 dBW (100–10,000 W) per sector.
  • Mobile devices transmit at a peak EIRP of around 23 dBm (0.2 W), but substantially lower most of the time when lower power levels suffice to achieve a desired quality of service as determined using real-time power control techniques.
  • As LTE uses efficient transmission protocols, emissions can be accurately modeled as brickwall, that is, confined to a finite bandwidth around the carrier.
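Since the list above mixes dBW and dBm, a small conversion sketch shows how the watt figures in parentheses are obtained; the decibel-to-watt formulas are standard and nothing here is specific to LTE:

```python
# Converting the quoted decibel figures to watts. dBW is referenced
# to 1 W, dBm to 1 mW; both conversions are standard.

def dbw_to_watts(dbw: float) -> float:
    return 10 ** (dbw / 10)

def dbm_to_watts(dbm: float) -> float:
    return 10 ** ((dbm - 30) / 10)

print(dbw_to_watts(20))            # 100.0 W, base-station lower bound
print(dbw_to_watts(40))            # 10000.0 W, base-station upper bound
print(round(dbm_to_watts(23), 1))  # 0.2 W, mobile-device peak
```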

Throughout this article it will be presumed that LTE emissions in the bands authorized for RNSS systems such as GPS will be kept sufficiently low through regulatory means.

The opening photo shows a typical base-station tower, with three sectors per cellular service provider and with multiple service providers sharing space on the tower, including non-cellular fixed point microwave providers. As a cellular network is being built out, coverage is at first most important, and many base-station sites will use minimum downtilt and peak EIRPs within the ranges described above. As the network matures, capacity becomes more important. High-traffic cells are split through the introduction of more base stations, and this is commonly accompanied by increased downtilts and lower EIRPs.

The assumed characteristics for adjacent-band systems play a paramount role in determining compatibility with GPS, and obviously lower-power adjacent-band systems would be more compatible. If compatibility with GPS precludes 4G network implementation on certain underutilized frequencies adjacent to RNSS bands, then it may be prudent to refocus attention for these bands on alternative lower-power systems.

GPS Receiver Susceptibility

Over the past two years, millions of dollars have been expended to measure or analyze the susceptibility of GPS receivers to adjacent band interference as part of U.S. regulatory proceedings for LightSquared. Measurements were conducted through both radiated (see photo) and conducted tests at multiple facilities, as well as in a live-sky demonstration in Las Vegas. This section summarizes the findings for seven categories of GPS receivers. These categories, which were originally identified in the Federal Communications Commission (FCC)-mandated GPS-LightSquared Technical Working Group (TWG) formed in February 2011, are: aviation, cellular, general location/navigation, high-precision, timing, networks, and space-based receivers.

Aviation. Certified aviation GPS receivers are one of the few receiver types for which interference requirements exist. These requirements take the form of an interference mask (see Figure 1) that is included in both domestic and international standards. Certified aviation GPS receivers must meet all applicable performance requirements in the presence of interference levels up to those indicated in the mask as a function of center frequency. In Figure 1 and throughout this article, all interference levels are referred to the output of the GPS receiver passive-antenna element. Although the mask only spans 1500–1640 MHz, within applicable domestic and international standards the curves are defined to extend over the much wider range of frequencies from 1315 to 2000 MHz.

Radiated testing of GPS receiver susceptibility to LightSquared emissions within an anechoic chamber at White Sands Missile Range (courtesy of the United States Air Force).

Figure 1. Certified aviation receiver interference mask.

A handful of aviation GPS receivers were tested against LightSquared emissions in both conducted and radiated campaigns. The results indicated that these receivers are compliant with the mask with potentially some margin. However, the Federal Aviation Administration (FAA) noted the following significant limitations of the testing:

  • Not all receiver performance requirements were tested.
  • Only a limited number of certified receivers were tested, and even those tested were not tested with every combination of approved equipment (for example, receiver/antenna pairings).
  • Tests were not conducted in the environmental conditions that the equipment was certified to tolerate (for example, across the wide range of temperatures that an airborne active antenna experiences, and the extreme vibration profile that is experienced by avionics upon some aircraft).

Due to these limitations, the FAA focused attention upon the standards rather than the test results for LightSquared compatibility analyses, and these standards are also recommended for use in the development of national GPS interference standards. One finding from the measurements of aviation receivers that may be useful, however, is that the devices tested exhibited susceptibilities to out-of-band interference that were nearly constant as a function of interference bandwidth. This fact is useful since the out-of-band interference mask within aviation standards is only defined for continuous-wave (pure tone) interference, whereas LightSquared and other potential adjacent-band systems use signals with bandwidths of 5 MHz or greater.

Cellular. The TWG tested 41 cellular devices supplied by four U.S. carriers (AT&T, Sprint, US Cellular, and Verizon) against LightSquared emissions in the late spring/early summer of 2011. At least one of the 41 devices failed industry standards in the presence of a 5- or 10-MHz LTE signal centered at 1550 MHz at levels as low as –55 dBm, and at least one failed for a 10-MHz LTE signal centered at 1531 MHz at levels as low as –45 dBm. The worst performing cellular devices were either not production models or very old devices, and if the results for these devices are excluded, then the most susceptible device could tolerate a 10-MHz LTE signal centered at 1531 MHz at power levels of up to –30 dBm. Careful retesting took place in the fall of 2011, yielding a lower maximum susceptibility value of –27 dBm under the same conditions.

General Location/Navigation. The TWG effort tested 29 general location/navigation devices. In the presence of a pair of 10-MHz LTE signals centered at 1531 MHz and 1550 MHz, the most susceptible device experienced a 1-dB signal-to-noise ratio (SNR) degradation when each LTE signal was received at –58.9 dBm. In the presence of a single 10-MHz LTE signal centered at 1531 MHz, the most susceptible device experienced a 1-dB SNR degradation when the interfering signal was received at –33 dBm.

Much more extensive testing of the effects of a single LTE signal centered at 1531 MHz on general location/navigation devices was conducted in the fall of 2011, evaluating 92 devices. The final report on this campaign noted that 69 of the 92 devices experienced a 1-dB SNR decrease or greater when “at an equivalent distance of greater than 100 meters from the LightSquared simulated tower.” Since the tower was modeled as transmitting an EIRP of 62 dBm, the 100-meter separation is equivalent to a received power level of around –14 dBm. The two most susceptible devices experienced 1-dB SNR degradations at received power levels less than –45 dBm.
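As a sanity check, the quoted equivalence between a 100-meter separation and roughly –14 dBm of received power follows directly from the free-space path-loss model. This is a sketch only, assuming an isotropic receive antenna and the standard FSPL constant for km/MHz units:

```python
import math

def fspl_db(d_m: float, f_mhz: float) -> float:
    """Free-space path loss in dB for a distance in meters and frequency in MHz."""
    return 20 * math.log10(d_m / 1000.0) + 20 * math.log10(f_mhz) + 32.45

# Received power = EIRP - path loss (isotropic receive antenna assumed)
eirp_dbm = 62.0   # simulated-tower EIRP from the report
f_mhz = 1531.0    # LTE signal center frequency
d_m = 100.0       # separation distance from the report

p_rx_dbm = eirp_dbm - fspl_db(d_m, f_mhz)
print(round(p_rx_dbm, 1))  # ≈ -14.1 dBm, consistent with "around -14 dBm"
```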

High Precision, Timing, Networks. The early 2011 TWG campaign tested 44 high-precision and 13 timing receivers. Ten percent of the high-precision (timing) devices experienced a 1-dB or greater SNR degradation in the presence of a 10-MHz LTE signal centered at 1550 MHz at a received power level of –81 dBm (–72 dBm). With the 10-MHz LTE signal centered at 1531 MHz, this level increased to –67 dBm (–39 dBm).

The reason that some high-precision GPS receivers are so sensitive to interference in the 1525–1559 MHz band is that they were built with wideband radiofrequency front-ends to intentionally process both GPS and mobile satellite service (MSS) signals. The latter signals provide differential GPS corrections supplied by commercial service providers that lease MSS satellite transponders, from companies including LightSquared.

Space. Two space-based receivers were tested for the TWG study. The first was a current-generation receiver, and the second a next-generation receiver under development. The two receivers experienced 1-dB C/A-code SNR degradation with total interference power levels of –59 dBm and –82 dBm in the presence of two 5-MHz LTE signals centered at 1528.5 MHz and 1552.7 MHz. For a single 10-MHz LTE signal centered at 1531 MHz, the levels corresponding to a 1-dB C/A-code SNR degradation increased to –13 dBm and –63 dBm. The next-generation receiver was more susceptible to adjacent-band interference because it was developed to “be reprogrammed in flight to different frequencies over the full range of GNSS and augmentation signals.”

Discussion. Although extensive amounts of data were produced, the LightSquared studies are insufficient by themselves for the development of GPS interference standards, since they only assessed the susceptibility of GPS receivers to interference at the specific carrier frequencies and with the specific bandwidths proposed by LightSquared. If GPS interference standards are to be developed for additional bands, then much more comprehensive measurements will be necessary.

Interestingly, in 1998 NTIA initiated a GPS receiver interference susceptibility study, funded by the Department of Defense (DoD) and conducted by DoD’s Joint Spectrum Center. One set of curves produced by the study is shown in Figure 2; this format would be a useful output of a further measurement campaign. The curves depict the interference levels needed to produce a 1-dB SNR degradation in one GPS device as the bandwidth and center frequency of the interference are varied. The NTIA curves only extended from GPS L1 (1575.42 MHz) ± 20 MHz; a much wider range would be needed to develop GPS interference standards as envisioned by the PNT EXCOM. To minimize testing, it may be possible to exclude certain frequency ranges corresponding to bands that stakeholders agree are unlikely to be repurposed for new (for example, mobile broadband) systems.

Figure 2. Example of NTIA-initiated receiver susceptibility measurements from 1998.

Receiver-Transmitter Proximity

With the exception of the studies focused on aviation and space applications, the LightSquared studies devoted far less attention to receiver-transmitter proximity. Minimum separation distances and the associated geometry are obviously very important in determining the maximum interference level that might be expected for a given LTE network (or other adjacent-band system) laydown.

Within the TWG, the assumption generally made for the other (non-aviation, non-space) GPS receiver categories was that they would be exposed to the power levels measured a couple of meters above the ground from a live LightSquared tower in Las Vegas. Figure 3 shows one set of received power measurements from Las Vegas: the dots are received power levels measured by a test van, the top curve is a prediction based on the free-space path-loss model, and the bottom curve is a prediction based on the Walfisch-Ikegami line-of-sight (WILOS) propagation model. The NPEF studies presumed that the user could be within the boresight of a sector antenna even at small distances from the antenna (where the user would need to be at a significant height above ground).
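The two prediction curves in Figure 3 can be reproduced in outline with the standard free-space and COST-231 Walfisch-Ikegami line-of-sight formulas. The sketch below is illustrative only: the 62-dBm EIRP is taken from the simulated-tower figure quoted elsewhere in this article, and antenna patterns and terrain are ignored.

```python
import math

def fspl_db(d_km, f_mhz):
    # Free-space path loss (dB), distance in km, frequency in MHz
    return 32.45 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def wilos_db(d_km, f_mhz):
    # COST-231 Walfisch-Ikegami line-of-sight loss (dB), valid for d >= 0.02 km
    return 42.6 + 26 * math.log10(d_km) + 20 * math.log10(f_mhz)

eirp_dbm = 62.0  # assumed tower EIRP, matching the report's simulated tower
for d in (0.1, 0.3, 1.0):  # km
    p_fs = eirp_dbm - fspl_db(d, 1531.0)   # upper curve in Figure 3
    p_wi = eirp_dbm - wilos_db(d, 1531.0)  # lower curve in Figure 3
    print(f"{d:4.1f} km: free-space {p_fs:6.1f} dBm, WILOS {p_wi:6.1f} dBm")
```

As in Figure 3, the WILOS model predicts lower received power than free space, and the gap widens with distance.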


Figure 3 Measurements of received power levels from one experimental LightSquared base station sector in Las Vegas live-sky testing.

The difference between these received LTE signal power assumptions has been hotly debated, especially after LightSquared proposed limiting received power levels from the aggregate of all transmitting base stations as measured a couple of meters above the ground in areas accessible to a test vehicle. After summarizing the aviation scenarios developed by the FAA, this section highlights scenarios where so-called terrestrial GPS receivers can be at above-ground heights well over 2 meters. The importance of accurately understanding transmitter-receiver proximity is illustrated by Figure 4, which shows predicted received power levels for one LTE base station sector transmitting with an EIRP of 30 dBW from an antenna height of 20 meters (65.6 feet). The figure was produced assuming the free-space path-loss model and a typical GPS patch-antenna gain pattern for the user. Note that maximum received power levels are very sensitive to the victim GPS receiver’s antenna height.

Figure 4. Received power in dBm at the output of a GPS patch antenna from one 30-dBW EIRP LTE base station sector at 20 meters.

Aviation. The first LightSquared-GPS study conducted for civil aviation was completed by the Radio Technical Commission for Aeronautics (RTCA) at the request of the FAA. Because of the extremely short requested turnaround time (3 months), RTCA consciously decided not to devote any of the available time to developing new operational scenarios, instead re-using scenarios it had developed for earlier interference studies. It was later realized that the combination of the five re-used scenarios and the assumed LightSquared network characteristics did not accurately identify the most stressing real-world scenarios. For instance, within the RTCA report, base-station towers were all assumed to be 30 meters in height. At this height, towers could not be close to runway thresholds, where aircraft fly very low to the ground, because obstacle clearance surfaces would preclude this situation. Later studies used actual base-station locations, from which the aviation community became aware that cellular service providers do place base stations close to airports, lowering base-station heights as necessary to keep the antenna structure just below obstacle clearance surfaces.

The FAA completed an assessment of LightSquared-GPS compatibility in January 2012 that identified scenarios where certified aviation receivers could experience much higher levels of interference than was assessed in the RTCA report. The areas where fixed-wing and rotary-wing aircraft rely on GPS are depicted in Figures 5 and 6 (above the connected line segments), respectively.


Figure 5. Area where GPS use must be assured for fixed-wing aircraft.


Figure 6. Area where GPS use must be assured for rotary-wing aircraft.

Aircraft rely upon GPS for navigation and Terrain Awareness and Warning Systems (TAWS). Helicopter low-level en-route navigation and TAWS for fixed- and rotary-wing aircraft are perhaps the most challenging scenarios for ensuring GPS compatibility with adjacent-band cellular networks. In these scenarios, the aircraft can be within the boresight of cellular sector antennas and in very close proximity, resulting in very high received-power levels. The FAA attempted to provide some leeway for LightSquared while maintaining safe functionality of TAWS through the concept of exclusion zones (see Figure 7). The idea is that, at least for cellular base-station transmitters on towers included within TAWS databases, the GPS function would be permitted to be unavailable within very small zones around the LTE base-station tower. This concept is currently notional only; the FAA plans to evaluate its feasibility and the appropriate exclusion-zone size more carefully with the assistance of other aviation industry stakeholders.
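To get a feel for the scale of such an exclusion zone, one can invert the free-space path-loss model to find the distance at which received power falls to a given susceptibility threshold. The numbers below are purely hypothetical; actual exclusion-zone sizing would have to account for antenna patterns, terrain, and safety margins.

```python
import math

def exclusion_radius_km(eirp_dbm, threshold_dbm, f_mhz):
    """Free-space distance (km) at which received power falls to threshold_dbm.
    Hypothetical illustration only: inverts FSPL = 32.45 + 20log10(d_km) + 20log10(f_MHz)."""
    required_loss_db = eirp_dbm - threshold_dbm
    return 10 ** ((required_loss_db - 32.45 - 20 * math.log10(f_mhz)) / 20)

# Hypothetical numbers: 62-dBm-EIRP sector, receiver tolerating -30 dBm at 1531 MHz
r = exclusion_radius_km(62.0, -30.0, 1531.0)
print(f"{r * 1000:.0f} m")  # on the order of 600 m for these assumed values
```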


Figure 7. Example exclusion area around base station to protect TAWS.

High-precision and Networks: Reference Stations. To gain insight into typical reference-station heights for differential GPS networks, the AGL heights of sites comprising the Continuously Operating Reference Station (CORS) network organized by the National Geodetic Survey (NGS) were determined. The assessment procedure is detailed in the Appendix.

Figure 8 portrays a histogram of estimated AGL heights for the 1543 operational sites within the continental United States (CONUS) as of February 2012. The accuracy of the estimated AGL heights is on the order of 16 meters (90 percent confidence), limited primarily by the quality of the terrain data utilized. The mean and median site heights are 5.7 and 5.2 meters, respectively.

Figure 8. Distribution of heights for CORS sites.

RALR, atop the Archdale Building in Raleigh, North Carolina, was the tallest identified site at 64.1 meters. This site, however, was decommissioned in January 2012 (although it was identified as operational in a February 2012 NGS listing of sites). The second tallest identified site is WVHU in Huntington, West Virginia, at 39.6 meters, still operational atop a Marshall University building. Of the 1543 CORS sites within CONUS, 223 have AGL heights greater than 10 meters, and the taller sites tend to be in urban areas, where cellular networks have the greatest base-station density.

High Precision and Networks: End Users. Many high-precision end users employ GPS receivers at considerable heights above ground. For instance, high-precision receivers are relied upon within modern construction methods. The adjacent photos show GPS receivers used for the construction of a 58-story skyscraper called The Bow in Calgary, Canada. For this project, a rooftop control network was established on top of neighboring buildings using both GPS receivers and other surveying equipment (for example, 360-degree prisms for total stations), and GPS receivers were moved up with each successive stage of the building to keep structural components plumb and properly aligned. Similar techniques are being used for the Freedom Tower, the new World Trade Center, in New York City, and many other current construction projects.

Other terrestrial applications that rely on high-precision GPS receivers at significant heights include structural monitoring and control of mechanical equipment such as gantry cranes. At times, even ground-based survey receivers can be substantially elevated. Although a conventional surveying pole or tripod typically places the GPS antenna 1.5–2 meters above the ground, much longer poles are available and occasionally used where obstructions are present. Four-meter GPS poles are often utilized, and poles of up to 40 ft (12.2 meters) are available from survey supply companies.

General Location/Navigation. Although controlling received power from a cellular network at 2 meters AGL may be suitable to protect many general navigation/location users, it is not adequate by itself. For example, GPS receivers are used for tracking trucks and for positive train control (the latter mandated in the United States by the Rail Safety Improvement Act of 2008). GPS antennas for trucks and trains are often situated on top of these vehicles. Large trucks for use on U.S. public roads can be up to 13 ft 6 in (~4.1 meters) tall, and a typical U.S. locomotive height is 15 ft 5 in (~4.7 meters). Especially in a mature network using high downtilts, received power at these AGL heights can be substantially higher than at 2 meters.

Within the TWG and NPEF studies, the general location/navigation GPS receiver category is defined to include non-certified aviation receivers. One notable application is the use of GPS to navigate unmanned aerial vehicles. UAVs are increasingly being used for law enforcement, border control, and many other applications where the UAV can be expected to occasionally pass within the boresight of cellular antennas at short ranges.

Cellular. The majority of Americans own cell phones, and a growing number are using cell phones as a replacement for landlines within their home. Already, 70 percent of 911 calls are made on mobile phones. Although pedestrians and car passengers are often within 2 meters of the ground, this is not always the case. Figure 9 shows three cellular sector antennas situated atop a building filled with residential condominiums. The rooftop is accessible and frequently used by the building inhabitants. According to an online real estate advertisement, “The Garden Roof was voted the Best Green Roof in Town and provides amazing 360 degree views of downtown Nashville as well as four separate sitting areas and fabulous landscaping.” One of the sector antennas is pointing towards the opposite corner of the building. If the downtilt is in the vicinity of 2–6 degrees, then it is quite likely that a person making a 911 call from the rooftop could see a received power level of –10 dBm to 0 dBm, high enough to disrupt GPS within most cellular devices if the antennas were transmitting in the 1525–1559 MHz band.

Figure 9. Cellular antennas atop Westview Condominium Building in downtown Nashville.

This situation is not unusual. Many cellular base stations are situated on rooftops in urban areas, and many illuminate living areas in adjacent buildings. In recent years, New York City even considered legislation to protect citizens from potential harmful effects of the more than 2,600 cell sites in the city, since many sites are in very close proximity to residential areas.

Propagation Models

Within the LightSquared proceedings, there was a tremendous amount of debate regarding propagation models. Communication-system service providers typically use propagation models that are conservative in the sense that they overestimate propagation losses; this conservatism is necessary so that service can be provided to end users with high availability. From the standpoint of potential victims of interference, however, it is far more desirable to underestimate propagation losses so that interference can be kept below an acceptable level a very high percentage of the time. As shown in Figure 3, some received power measurements from the Las Vegas live-sky test indicate values even greater than would be predicted using the free-space propagation model. Statistical models that allow for this possibility were used in the FAA Status Report. The general topic of propagation models is worthy of additional study if GPS interference standards are to be developed.

Future Considerations

GPS is being modernized. Additionally, satellite navigation users now enjoy the fact that the Russian GLONASS system has recently returned to full strength with the repopulation of its constellation. In the next decade, satellite navigation users also eagerly anticipate the completion of two other global GNSS constellations: Europe’s Galileo and China’s Compass. Notably, between the GPS modernization program and the deployment of these other systems, satellite navigation users are expected to soon be relying upon equipment that is multi-frequency and that needs to process many more signals with varied characteristics. New equipment offers an opportunity to insert new technologies such as improved filtering, but of course the need to process additional signals and carrier frequencies may make GNSS equipment more susceptible to interference as well. Clearly, these developments will need to be carefully assessed to support the establishment of GPS spectrum interference standards.

Summary

This article has identified a number of considerations for the development of GPS interference standards, which have been proposed by the PNT EXCOM. If the United States proceeds with the development of such standards, it is hoped that the information within this article will prove useful to those involved.

Bow highrise under construction in Calgary, showing GPS receivers in use (photos courtesy Rocky Annett, MMM Group Ltd.)

 

Appendix: AGL Heights of CORS Network Sites

The National Geodetic Survey Continuously Operating Reference Station (CORS) website provides lists of CORS site locations in a number of different reference frames. To determine the height above ground level, h_AGL, for each site within this study, two of these files (igs08_xyz_comp.txt and igs08_xyz_htdp.txt) were used. These two files provide the (x, y, z) coordinates of the antenna reference point (ARP) for each site in the International GNSS Service 2008 (IGS08) reference frame, which is consistent with the International Terrestrial Reference Frame (ITRF) of 2008. The coordinates are divided into two files because the site listings also provide site velocities, which are either computed (for sites that have produced data for at least 2.5 years) or estimated (for newer sites). The comp file includes sites with computed velocities, and the htdp file includes sites with estimated velocities (obtained using an NGS program known as HTDP).

The data files can readily be used to produce the height above the ellipsoid, h_ellip, for each site, using well-known equations to convert from (x, y, z) to (latitude, longitude, height). Obtaining estimates of h_AGL additionally requires the geoid height, N, and terrain data, per the relationship:

h_AGL = h_ellip − N − h_terrain  (A-1)

where h_terrain is the terrain elevation above mean sea level.

For the results presented in this article, terrain data was obtained from http://earthexplorer.usgs.gov in the Shuttle Radar Topography Mission (SRTM) Digital Terrain Elevation Data (DTED) Level 2 format. For this terrain data, the horizontal datum is the World Geodetic System (WGS 84), and the vertical datum is mean sea level (MSL) as determined by the Earth Gravitational Model (EGM) 1996. Each data file covers a 1° by 1° cell in latitude/longitude, and individual points are spaced 1 arcsec apart in both latitude and longitude. SRTM DTED Level 2 has a system design specification of 16-meter absolute vertical accuracy, 10-meter relative vertical accuracy, and 20-meter absolute horizontal circular accuracy, all at the 90 percent level. Considering the accuracies of the DTED data, the differences between WGS-84 and IGS08, as well as between the ARP and the antenna phase center, were considered negligible. Geoid heights were interpolated from 15-arcmin data available in the MATLAB Mapping Toolbox using the egm96geoid function.
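The first step of this procedure, recovering latitude, longitude, and ellipsoidal height from the published ECEF (x, y, z) coordinates, can be sketched as follows. The conversion uses standard WGS-84 constants and a fixed-point iteration; the site coordinates, geoid undulation, and terrain elevation shown are hypothetical placeholders, not values for an actual CORS site.

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0              # semi-major axis (m)
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def ecef_to_geodetic(x, y, z):
    """Convert ECEF (x, y, z) in meters to (lat deg, lon deg, ellipsoidal height m)
    by fixed-point iteration on the latitude."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - E2))   # initial guess
    for _ in range(10):                  # converges rapidly
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
    return math.degrees(lat), math.degrees(lon), h

# Hypothetical coordinates (not an actual CORS site):
lat, lon, h_ellip = ecef_to_geodetic(1101972.0, -4583180.0, 4282480.0)
geoid_n = -32.0     # hypothetical geoid undulation N (m)
terrain_msl = 80.0  # hypothetical terrain elevation above MSL (m)
h_agl = h_ellip - geoid_n - terrain_msl   # relationship (A-1)
```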

Lower AGL heights are preferred for CORS sites to minimize motion between the antenna and the Earth’s crust. However, many sites are at significant heights above the ground by necessity, particularly in urban areas due to the competing desire for good sky visibility.

Source: http://www.gpsworld.com/spectrum-interference-standards-seeking-a-win-win-rebound-from-lose-lose/

Full spectrum millimeter-wave modulation

2 Oct

Abstract

In recent years, the development of new lithium niobate electro-optic modulator designs and material-processing techniques has considerably extended the operational bandwidth of modulators, supporting the increasing need for faster optical networks. In an effort to provide higher bandwidths for future generations of networks, we have developed a lithium niobate electro-optic phase modulator based on a coplanar-waveguide ridged structure that operates up to 300 GHz. By thinning the lithium niobate substrate down to less than 39 µm, we are able to eliminate substrate modes and observe optical sidebands over the full millimeter-wave spectrum.

© 2012 OSA

1. Introduction

With the number of multimedia services, wireless access, internet devices and mobile users constantly growing, signal-processing techniques such as time-division multiplexing (TDM) [Kawanishi, 1998] and wavelength-division multiplexing (WDM) [Brackett, 1990] have been developed to extend the operational bandwidth of existing optical networks. As a result, 10 Gb/s and 40 Gb/s optical networks are now standard, while 100 Gb/s networks are being tested and implemented [McDonough, 2007]. However, those methods have limitations, and one path toward increasing network bandwidth is to accelerate the data transmission speed. This requires the development of a number of components, one of which is the modulator. Ultrahigh-speed modulators are key components in the development of optical fiber networks, as they set the transmission capacity from the electrical to the optical domain. A variety of modulators operating into the millimeter-wave (mmW) range have previously been developed for the telecommunications market [Wooten et al., 2000; Li and Yu, 2003].
In addition, interest in mmW imaging has also contributed to the support and development of mmW modulators [Schuetz et al., 2005]. Electro-optic (EO) polymer modulators [Shi et al., 2000] and electroabsorption (EA) modulators [Li et al., 1999] have shown the ability to operate in the mmW region, but lithium niobate (LiNbO3) EO modulators possess several advantages over both. They can operate at very high bandwidths, offer very small frequency chirp, handle high optical power, have low optical loss, require relatively low drive voltages because of a high EO coefficient, and are very stable over time. Moreover, the LiNbO3 EO modulator is a mature technology that is widely used in the current optical network infrastructure, and it has shown strong optical response up to 110 GHz [Noguchi et al., 1998; Macario et al., 2009]. In this paper, we present a LiNbO3 EO phase modulator that operates over the entire mmW region, with sidebands demonstrated up to 300 GHz.

2. Device design and fabrication

In LiNbO3 EO phase modulator design, the modulating radio-frequency (RF) signal interacts with the optical signal to create sidebands on the optical carrier. To maximize this interaction, we designed a ridged coplanar waveguide (CPW) electrode on top of a Ti-diffused optical waveguide to support the RF signal [Mitomi et al., 1995; Noguchi et al., 1995]. To efficiently convert the electrical energy into optical energy and create sidebands, five main criteria need to be optimized. First, the mmW effective index has to be reduced from ~6 down to the optical effective index of 2.19 in order to maximize the copropagating interaction of the modulation and optical signals [Aoki et al., 2006]. Such index matching is achieved by combining a ridged CPW structure, thick electrodes, and a silicon dioxide (SiO2) buffer layer between the electrodes and the LiNbO3 surface. The ridge structure and the buffer layer substitute low-index materials (air and SiO2, of index 1.46) for the high-index LiNbO3, while the thick-electrode CPW structure pulls the electric field into the air. Second, the dielectric and conduction losses need to be minimized; the dielectric loss depends greatly on the LiNbO3 substrate and buffer-layer properties, whereas the conduction loss is mostly determined by the CPW geometry and material [Noguchi et al., 1998]. Third, the CPW impedance must be matched to limit reflection losses when coupled to a standard 50 Ω transmission line. Fourth, the overlap between the RF and optical modes must be maximized, which is done by reducing, as much as possible, the gap between the RF electrode and the ground electrodes of the CPW. Finally, coupling of RF energy into substrate modes must be eliminated. The modes supported by the substrate are directly related to its thickness [Kasilingam and Rutledge, 1983; Gopalakrishnan et al., 1992]. As the frequency of operation increases, the RF wavelength decreases to the point where RF modes are supported by the substrate, causing RF energy to leak out of the CPW structure into the substrate. If a substrate mode and the CPW mode propagate down the modulator at the same speed, they interact strongly and deteriorate the electrical propagation properties of the CPW mode at that particular RF frequency. Therefore, the substrate needs to be thinned down to prevent substrate modes at higher frequencies [Shi, 2006; Kondo et al., 2005]. The transmission parameter S21 has been measured for a modulator at two different substrate thicknesses to show the effect of substrate modes on the propagating RF mode (Fig. 1). By thinning the LiNbO3 substrate down to 65 µm, the RF signal only starts coupling into substrate modes at 180 GHz, as opposed to about 70 GHz for a 500-µm-thick substrate.

Fig. 1 Effect of substrate mode coupling on mmW electrical propagation properties. The transmission parameter S21 shows the importance of reducing the substrate’s thickness to eliminate the coupling of the RF signal into substrate modes and therefore enhance the transmission. With a substrate thickness reduced to 65 µm, substrate mode coupling is observed starting at 180 GHz.
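The impact of the first criterion, velocity matching, can be estimated with a textbook traveling-wave model (not taken from this paper): the modulation response rolls off as sinc(πfLΔn/c), reaching its 3-dB point when the argument is about 1.39 rad. The sketch below uses the paper's 2-cm electrode length and optical index of 2.19; the "matched" RF index of 2.25 is an illustrative assumption.

```python
import math

C = 2.998e8        # speed of light (m/s)
L = 0.02           # electrode length from the paper: 2 cm
N_OPT = 2.19       # optical effective index quoted in the paper

def f3db_ghz(n_rf):
    """3-dB modulation bandwidth (GHz) limited by velocity mismatch, per the
    textbook traveling-wave model: response ~ sinc(pi*f*L*dn/c), down 3 dB
    when the argument reaches ~1.39 rad."""
    dn = abs(n_rf - N_OPT)
    if dn == 0:
        return math.inf
    return 1.39 * C / (math.pi * L * dn) / 1e9

print(f3db_ghz(6.0))   # unmatched LiNbO3 index: only a few GHz
print(f3db_ghz(2.25))  # nearly velocity-matched: well over 100 GHz
```

This is why the index-matching measures described above (ridge, thick electrodes, buffer layer) are essential for full-spectrum mmW operation.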
Ultimately, a trade-off between those five main criteria is necessary to achieve an optimal design for a given application. A numerical simulator based on the finite element method (FEM) is used to optimize the mode overlap between the RF and optical signals as well as the impedance matching [Huang et al., 2006]. The design of the modulator cross-section resulting from this analysis is represented in Fig. 2. The CPW thickness T, the central electrode width S, the electrode length L, the gap G, the buffer oxide thickness B, the ridge height H, and the ridge width R are respectively 25 µm, 8 µm, 2 cm, 25 µm, 0.9 µm, 3.6 µm, and 10.5 µm, with a substrate thickness D equal to or less than 39 µm [Shi, 2006], which corresponds to the cutoff thickness of the substrate modes over the entire mmW region.

Fig. 2 Modulator cross-section. A ridged CPW structure is built on top of a Ti in-diffused waveguide. High aspect-ratio electrodes combined with a SiO2 buffer layer and a ridge structure allows effective index matching between optical and RF signals represented by E. A small gap G between the CPW electrodes leads to a strong mode-overlapping. A thinned LiNbO3 substrate underneath the CPW structure eliminates substrate modes over the full mmW range.
Following the design presented in Fig. 2, a set of sixty modulators was fabricated on a 500-µm-thick z-cut LiNbO3 wafer using the following process. First, a titanium strip is diffused into the LiNbO3 substrate at around 1000°C for 10 hours to form the optical waveguide [Schmidt and Kaminow, 1974]. Second, the ridge structure is fabricated by dry etching the LiNbO3 surrounding the optical waveguide using inductively coupled plasma (ICP) reactive-ion etching (RIE) in a chlorine environment [Ren et al., 2008]. Third, a silicon dioxide buffer layer is deposited on the substrate by plasma-enhanced chemical vapor deposition (PECVD), followed by a 6-hour annealing process at 600°C. Fourth, the high-aspect-ratio CPW structure is defined by lithography using SU8-2015 photoresist from MicroChem, before electroplating the open surface with a gold solution to build up the CPW electrodes to the desired thickness. Fifth, the modulators are diced into small groups, and each group’s end faces are polished. Sixth, each modulator is diced into a single chip and individually thinned down to the desired thickness; for this thinning process, a 400-µm-wide groove is machined underneath the signal electrode over the entire length of the modulator. The two modulators reported later in this paper have both been thinned below the 39-µm substrate-mode cutoff thickness, one to 30 µm and the other to 20 µm. Finally, polarization-maintaining optical fibers are bonded to both end faces of the modulator using UV-curable epoxy.

3. Experimental setup

The S-parameters, the optical sidebands and the DC-Vπ are the three measurements used to characterize the modulator. Four sets of equipment are necessary to cover the 300 GHz bandwidth, which is divided into four bands: 0-110 GHz, 110-170 GHz, 170-220 GHz and 220-300 GHz. An Agilent E8361C Programmable Network Analyzer (PNA) is used as the main RF source. In the 0-110 GHz range, we use Agilent N5260 T/R modules, 1 mm coaxial cables and corresponding GGB Industries probes; the RF power is measured using an Agilent E4418-B power meter. In the 110-300 GHz range, each band is covered by a set of VNA extension modules from OML, Inc., Millitech rectangular waveguides and the corresponding GGB Industries probes; an Erickson PM4 power meter is used to measure the RF power delivered in those bands. An EM4 laser centered at 1557.3 nm is used as the optical source. The optical sidebands are observed by connecting the modulator’s output fiber to a Yokogawa AQ6319 Optical Spectrum Analyzer (OSA). An external Mach-Zehnder interferometer configured around the modulator is used to perform the half-wave voltage DC-Vπ measurement: the interference pattern resulting from an applied voltage is examined, and the voltage corresponding to a 180° optical phase shift is recorded.
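The DC-Vπ extraction described above amounts to fitting the Mach-Zehnder transfer function to the measured interference pattern and reading off the voltage that produces a 180° phase shift. A minimal sketch, using synthetic data and assumed values (the sweep range, noise level and fit starting point are ours, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def mzi_power(v, p0, v_pi, phi0):
    """Mach-Zehnder transfer function: output power vs applied DC voltage.
    A voltage step of v_pi shifts the optical phase by 180 deg, moving the
    output from a transmission maximum to the adjacent minimum."""
    return p0 * np.cos(np.pi * v / (2.0 * v_pi) + phi0) ** 2

# Synthetic "measured" interference pattern (illustrative values only)
rng = np.random.default_rng(1)
v = np.linspace(-12.0, 12.0, 241)                      # DC voltage sweep
meas = mzi_power(v, 1.0, 8.6, 0.3) + rng.normal(0.0, 0.005, v.size)

# Least-squares fit; the initial guess must be near the true period to
# avoid locking onto an aliased solution of the periodic transfer function
popt, _ = curve_fit(mzi_power, v, meas, p0=(1.0, 8.0, 0.0))
print(f"DC-Vpi = {popt[1]:.2f} V")
```

In practice the same result can be read directly from the voltage spacing between an adjacent maximum and minimum of the recorded pattern; the fit simply uses all sample points at once.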

4. Device characterization

Scanning Electron Microscope (SEM) pictures of the end face of a 30 µm thick modulator, before fiber bonding, in the CPW launching area are shown in Fig. 3. In the launching area, the width of the signal electrode is 25 µm and the gap between the signal electrode and the ground electrodes is 75 µm.

Fig. 3 SEM pictures of the modulator fabricated. (a) CPW gold plated structure with LiNbO3 etched on each side. (b) Overall view of the end face of the modulator before optical fiber bonding. The LiNbO3 substrate has been thinned to 30 µm by micromachining a groove under the CPW structure to eliminate the substrate modes over the entire mmW region.
The S21 parameter of the 30 µm thick modulator has been measured over a 280 GHz bandwidth and showed no sign of substrate mode coupling (Fig. 4).

Fig. 4 Measured transmission parameter S21 over the 280 GHz bandwidth. The transmission parameter S21 confirms that the substrate modes have been suppressed for a substrate thickness of 30 µm.
The electrical properties of another modulator, 20 µm thick, have been extracted from a full S-parameter characterization using a transmission-line ABCD-matrix curve-fitting technique [G. L. Li, T. G. B. Mason, and P. K. L. Yu, “Analysis of segmented traveling-wave optical modulators,” J. Lightwave Technol. 22(7), 1789–1796 (2004)]. The electrical parameters of the modulator, consisting of the conductor losses αm, dielectric losses αd, CPW input impedance Zin and effective index nRF, are first approximated to generate an ABCD matrix from which S-parameters can be computed and compared to the measured S-parameters. Using a curve-fitting algorithm, the electrical parameters are adjusted until the calculated and measured S-parameters match. The electrical properties αm, αd, Zin and nRF have been estimated to be 0.28 dB/(cm·GHz1/2), 0.01 dB/(cm·GHz), 47 Ω and 2.19, respectively. An optical insertion loss of 3.7 dB has been obtained. The DC-Vπ has been directly measured at 8.6 V.
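The extraction loop above can be sketched as follows: model the CPW as a uniform transmission line, form its ABCD matrix, convert to S21, and fit the loss coefficients against the measurement. The sketch below fits synthetic data generated from the paper's extracted values; the 2 cm electrode length and the 50 Ω reference impedance are our assumptions (the excerpt does not state them), and only the two loss terms are fitted here, with Zin and nRF held fixed.

```python
import numpy as np
from scipy.optimize import curve_fit

C0 = 2.99792458e8   # speed of light (m/s)
Z_REF = 50.0        # assumed VNA reference impedance (ohms)
LENGTH = 0.02       # assumed electrode length: 2 cm

def s21_model(f_ghz, alpha_m, alpha_d, z_in, n_rf, length=LENGTH):
    """|S21| in dB of a uniform transmission line via its ABCD matrix.
    alpha_m: conductor loss, dB/(cm*GHz^0.5); alpha_d: dielectric loss,
    dB/(cm*GHz); z_in: characteristic impedance; n_rf: RF effective index."""
    f = np.asarray(f_ghz, dtype=float)
    # loss in dB/cm -> Np/m  (1 dB = ln(10)/20 Np, 1 cm = 0.01 m)
    alpha = (alpha_m * np.sqrt(f) + alpha_d * f) * (np.log(10) / 20) * 100.0
    beta = 2 * np.pi * f * 1e9 * n_rf / C0
    gl = (alpha + 1j * beta) * length
    # ABCD matrix of a transmission line, then the standard S21 conversion
    A, B = np.cosh(gl), z_in * np.sinh(gl)
    C, D = np.sinh(gl) / z_in, np.cosh(gl)
    s21 = 2.0 / (A + B / Z_REF + C * Z_REF + D)
    return 20 * np.log10(np.abs(s21))

# Synthetic "measured" trace built from the paper's extracted parameters
f = np.linspace(1.0, 280.0, 280)
measured = s21_model(f, 0.28, 0.01, 47.0, 2.19) \
    + np.random.default_rng(0).normal(0.0, 0.05, f.size)

# Fit the loss coefficients, holding Zin and nRF at their extracted values
popt, _ = curve_fit(lambda f, am, ad: s21_model(f, am, ad, 47.0, 2.19),
                    f, measured, p0=(0.2, 0.02))
print(popt)  # close to (0.28, 0.01)
```

The full technique in the cited reference fits all four parameters against the complete complex S-parameter set; restricting the sketch to |S21| and two parameters keeps the inversion well conditioned.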
As a result of good index matching, low propagation losses, good input impedance and very thin substrate, first order optical sidebands for the 20 µm thick modulator have been observed up to 300 GHz on the OSA. The modulator’s response has been measured in 1 GHz increments. However, to better illustrate the sidebands, a modulation spectrum with a 5 GHz spacing between each RF frequency is represented in Fig. 5. In the figure, each pair of sidebands, upper and lower, corresponds to the modulator’s optical response normalized to the RF input power feeding the modulator at the corresponding RF frequency.

Fig. 5 300 GHz optical modulation spectrum. Each pair of sidebands centered around the optical carrier represents the mmW energy for a given mmW frequency upconverted to optical energy using the EO effect of the LiNbO3.
The probe and feed losses, harmonic generation in the RF sources, and power meter limitations are the factors accounted for in the optical sideband normalization process. The probes’ insertion losses are provided by the vendor, whereas the feed losses were determined by backing out the return losses from S11 measurements. The harmonic generation in the RF sources was determined directly by measuring the corresponding optical sidebands. However, limits on the available RF source power above 170 GHz and on the measurement capabilities of power meters at these frequencies prevented accurate characterization of the modulator beyond 170 GHz. The power meter measurements at the band transitions were adjusted, as necessary, to obtain a consistent and continuous modulation characterization.
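The bookkeeping behind this normalization is straightforward dB arithmetic: subtract the probe and feed losses from the source power to get the power delivered at the CPW input, then refer the measured sideband to that delivered power. A minimal sketch (the function name and example numbers are ours):

```python
def normalize_sideband_db(sb_rel_carrier_db, p_source_dbm,
                          probe_loss_db, feed_loss_db):
    """Refer a measured first-order sideband (in dB relative to the optical
    carrier) to 0 dBm of RF power actually delivered at the CPW input,
    after removing probe insertion loss and feed loss."""
    p_delivered_dbm = p_source_dbm - probe_loss_db - feed_loss_db
    return sb_rel_carrier_db - p_delivered_dbm

# e.g. a -30 dB sideband with 10 dBm at the source, 2 dB probe loss and
# 1 dB feed loss becomes -37 dB per 0 dBm delivered
print(normalize_sideband_db(-30.0, 10.0, 2.0, 1.0))  # -37.0
```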
The Vπ of the modulator can be extracted from the sidebands measurements through the relation V π (f)=πZ in (f)/(2P sb (f)) − − − − − − − − − − − − − √

C. J. Huang, C. A. Schuetz, R. Shireen, S. Shi, and D. W. Prather, “LiNbO3 optical modulator for MMW sensing and imaging” in Passive Millimeter-Wave Imaging Technology XAnonymous (SPIE – The International Society for Optical Engineering, 2007).

], where Zin(f) is the CPW characteristic impedance and Psb(f) is the power in Watts of the normalized optical sideband. The Vπ measured using the normalized optical sidebands and the Vπ calculated using the S-parameters measurements are represented in Fig. 6 in the 0-170 GHz range, which corresponds to the bandwidth where the equipment allows accurate sidebands normalization.
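This relation follows from the small-signal phase-modulation picture: the first-order sideband-to-carrier ratio is (m/2)² with modulation index m = πV/Vπ, and V² = 2·Zin·PRF for a drive power PRF. A short sketch applying it (the function name and example inputs are ours; the -25 dB sideband per 1 mW of drive is chosen so the result lands near the reported DC value):

```python
import numpy as np

def v_pi_from_sideband(z_in_ohm, sb_rel_carrier_db, p_rf_watt):
    """Vpi(f) = pi * sqrt(Z_in(f) / (2 * P_sb(f))).

    sb_rel_carrier_db: first-order sideband power relative to the optical
                       carrier, in dB (read off the OSA trace)
    p_rf_watt:         RF power delivered to the CPW at that frequency
    P_sb is the sideband/carrier ratio normalized to the RF drive power,
    so it carries units of 1/W and the result comes out in volts."""
    p_sb = 10.0 ** (sb_rel_carrier_db / 10.0) / p_rf_watt   # 1/W
    return np.pi * np.sqrt(z_in_ohm / (2.0 * p_sb))

# Sanity check: with Zin = 47 ohm, a -25 dB sideband per 1 mW of RF drive
# gives Vpi of roughly 8.6 V, consistent with the DC value reported above
print(round(v_pi_from_sideband(47.0, -25.0, 1e-3), 2))
```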

Fig. 6 Measured and calculated modulator half-wave voltage Vπ. The measured Vπ extracted from the sidebands measurements is in agreement with the Vπ calculated using the S-parameters.
The good agreement in Fig. 6 between the Vπ measured from the normalized sidebands and the Vπ calculated from the S-parameters shows that the normalization process used to characterize the modulator is correct and that the modulation spectrum shown in Fig. 5 is accurate in the 0-170 GHz range. A DC-Vπ on the order of 8.6 V is obtained by extrapolating the measured Vπ of Fig. 6 down to DC, which was also verified experimentally.

5. Conclusions

In conclusion, a LiNbO3 EO phase modulator operating over the entire mmW bandwidth has been experimentally demonstrated, showing continuous sidebands up to 300 GHz. The design of the modulator has been presented and its main design criteria discussed. Precise index matching at nRF = 2.19, combined with low propagation loss, good impedance matching and strong mode overlap, proved to be necessary but not sufficient to allow modulation over the entire mmW domain. Thinning the LiNbO3 substrate to a few tens of microns to eliminate substrate modes is the key to extending the modulation bandwidth to 300 GHz. A driving voltage Vπ of 8.6 V is obtained at DC. We believe the integration of such an ultrahigh-speed modulator in the existing WDM optical fiber network will considerably increase the capacity of the current network at minimum cost.


Source: http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-20-21-23623
