Briefing: Flexible elastic-bandwidth networks
Vendors and service providers are implementing the first examples of flexible, elastic-bandwidth networks. Infinera and Microsoft detailed one such network at the Layer123 Terabit Optical and Data Networking conference held earlier this year.
Optical networking expert Ioannis Tomkos of the Athens Information Technology Center explains what flexible, elastic bandwidth is.
Part 1: Flexible elastic bandwidth

"We cannot design anymore optical networks assuming that the available fibre capacity is abundant"
Prof. Tomkos
Several developments are driving the evolution of optical networking. One is the incessant demand for bandwidth to cope with the 30+% annual growth in IP traffic. Another is the changing nature of the traffic due to new services such as video, mobile broadband and cloud computing.
"The characteristics of traffic are changing: A higher peak-to-average ratio during the day, more symmetric traffic, and the need to support higher quality-of-service traffic than in the past," says Professor Ioannis Tomkos of the Athens Information Technology Center.
"The growth of internet traffic will require core network interfaces to migrate from the current 10, 40 and 100Gbps to 1 Terabit by 2018-2020"
Operators want a more flexible infrastructure that can adapt to meet these changes, hence their interest in flexible elastic-bandwidth networks. The operators also want to grow bandwidth as required while making best use of the fibre's spectrum. They also require more advanced control plane technology to restore the network elegantly and promptly following a fault, and to simplify the provisioning of bandwidth.
The growth of internet traffic will require core network interfaces to migrate from the current 10, 40 and 100Gbps to 1 Terabit by 2018-2020, says Tomkos. Such bit-rates must be supported with very high spectral efficiencies, which, according to the latest demonstrations, are only a factor of two away from the Shannon limit. Simply put, optical fibre is rapidly approaching its capacity limit.
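To see why the Shannon limit bites, here is a minimal sketch of the dual-polarisation Shannon bound - log2(1 + SNR) bits/s/Hz per polarisation; the SNR values are illustrative assumptions, not measured figures:

```python
from math import log2

# Shannon bound per polarisation: C/B = log2(1 + SNR).
# Dual polarisation doubles it. SNR values below are illustrative only.
for snr_db in (10, 15, 20):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db}dB -> {2 * log2(1 + snr):.1f} bits/s/Hz (dual polarisation)")
```

Even a generous 20dB of signal-to-noise buys only around 13 bits/s/Hz across both polarisations, which is why spectral efficiency gains are slowing.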
"We cannot design anymore optical networks assuming that the available fibre capacity is abundant," says Tomkos. "As is the case in wireless networks where the available wireless spectrum/ bandwidth is a scarce resource, the future optical communication systems and networks should become flexible in order to accommodate more efficiently the envisioned shortage of available bandwidth.”
The attraction of multi-carrier schemes and advanced modulation formats is the prospect of operators modifying capacity in a flexible and elastic way based on varying traffic demands, while maintaining cost-effective transport.
Elastic elements
Optical systems providers now realise they can no longer keep increasing a light path's data rate while expecting the signal to still fit in the standard International Telecommunication Union (ITU)-defined 50GHz band.
It may still be possible to fit a 200 Gigabit-per-second (Gbps) light path in a 50GHz channel but not a 400Gbps or 1 Terabit signal. At 400Gbps, 80GHz is needed and at 1 Terabit it rises to 170GHz, says Tomkos. This requires networks to move away from the standard ITU grid to a flexible one, especially if operators want to achieve the highest possible spectral efficiency.
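Those channel widths map directly onto the spectrum slices of a flexible grid. A minimal sketch, assuming the 12.5GHz slice granularity of the ITU-T flexible-grid definition and the channel widths Tomkos quotes:

```python
import math

SLICE_GHZ = 12.5  # flexible-grid slice granularity (ITU-T G.694.1 flexgrid)

def slices_needed(signal_width_ghz):
    # Round the occupied width up to a whole number of grid slices
    return math.ceil(signal_width_ghz / SLICE_GHZ)

for rate, width_ghz in [("400Gbps", 80), ("1Tbps", 170)]:
    print(f"{rate}: {width_ghz}GHz -> {slices_needed(width_ghz)} x 12.5GHz slices")
```

A 400Gbps channel would occupy seven slices and a 1 Terabit channel fourteen, rather than being forced into multiples of the fixed 50GHz grid.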
Vendors can increase the data rate of a carrier signal by using more advanced modulation schemes than dual polarisation, quadrature phase-shift keying (DP-QPSK), the de facto 100Gbps standard. Such schemes include quadrature amplitude modulation at 16-QAM, 64-QAM and 256-QAM, but the more amplitude levels used, and hence the higher the data rate, the shorter the resulting reach.
Another technique vendors are using to achieve 400Gbps and 1Tbps data rates is to move from a single carrier to multiple carriers, or 'super-channels'. Such an approach boosts the data rate by encoding data on more than one carrier and avoids the loss in reach associated with higher-order QAM. But this comes at a cost: using multiple carriers consumes more of the fibre's precious spectrum.
As a result, vendors are looking at schemes to pack the carriers closely together. One is spectral shaping. Tomkos also details the growing interest in schemes such as optical orthogonal frequency division multiplexing (OFDM) and Nyquist WDM. For Nyquist WDM, the subcarriers are spectrally shaped so that they occupy a bandwidth close to or equal to the Nyquist limit, to avoid inter-symbol interference and crosstalk during transmission.
Both approaches have their pros and cons, says Tomkos, but they promise an optimum spectral efficiency of 2N bits-per-second-per-Hertz (2N bits/s/Hz), where N is the number of bits per symbol (log2 of the number of constellation points).
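As a rough illustration of how symbol rate, constellation size and polarisation combine to set the line rate, here is a minimal sketch; the 32 Gbaud symbol rate is an assumption for illustration, and FEC overhead is ignored:

```python
from math import log2

def line_rate_gbps(baud, constellation_points, polarisations=2):
    """Raw line rate = symbol rate x bits/symbol x polarisations.
    FEC and framing overhead are ignored for simplicity."""
    return baud * log2(constellation_points) * polarisations

print(line_rate_gbps(32, 4))   # DP-QPSK: 128 Gbps raw (~100Gbps after overhead)
print(line_rate_gbps(32, 16))  # DP-16QAM: 256 Gbps in the same bandwidth
# Nyquist-shaped carriers approach log2(M) bits/s/Hz per polarisation,
# i.e. 2*log2(M) bits/s/Hz with dual polarisation.
```

The sketch makes the tradeoff plain: quadrupling the constellation size only doubles the bit rate, while the reach penalty grows much faster.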
The attraction of these techniques - multi-carrier schemes and advanced modulation formats - is the prospect of operators modifying capacity in a flexible and elastic way based on varying traffic demands, while maintaining cost-effective transport.
"With flexible networks, we are not just talking about the introduction of super-channels, and with it the flexible grid," says Tomkos. "We are also talking about the possibility to change either dynamically."
According to Tomkos, vendors such as Infinera with its 5x100Gbps super-channel photonic integrated circuit (PIC) are making an important first step towards flexible, elastic-bandwidth networks. But for true elastic networks, a flexible grid is needed as is the ability to change the number of carriers on-the-fly.
"Once we have those introduced, in order to get to 1 Terabit, then you can think about playing with such parameters as modulation levels and the number of carriers, to make the bandwidth really elastic, according to the connections' requirements," he says.
Meanwhile, there are still technology advances needed before an elastic-bandwidth network is achieved, such as software-defined transponders and a new advanced control plane.
Tomkos says that operators are now using control plane technology that co-ordinates between layer three and the optical layer to reduce network restoration time from minutes to seconds. Microsoft and Infinera say they have gone from tens of minutes down to a few seconds using the more advanced optical infrastructure. "They [Microsoft] are very happy with it," says Tomkos.
But to provision new capacity at the optical layer, operators are talking about requirements in the tens of minutes; something they do not expect will change in the coming years. "Cloud services could speed up this timeframe," says Tomkos.
"There is usually a big lag between what operators and vendors do and what academics do," says Tomkos. "But for the topic of flexible, elastic networking, the lag between academics and the vendors has become very small."
Further reading:
OFC/NFOEC 2012 industry reflections - Part 1
The recent OFC/NFOEC show, held in Los Angeles, had a strong vendor presence. Gazettabyte spoke with Infinera's Dave Welch, chief strategy officer and executive vice president, about his impressions of the show, capacity challenges facing the industry, and the importance of the company's photonic integrated circuit technology in light of recent competitor announcements.
OFC/NFOEC reflections: Part 1

"I need as much fibre capacity as I can get, but I also need reach"
Dave Welch, Infinera
Dave Welch values shows such as OFC/NFOEC: "I view the show's benefit as everyone getting together in one place and hearing the same chatter." This helps identify areas of consensus and subjects where there is less agreement.
And while there were no significant surprises at the show, it did highlight several shifts in how the network is evolving, he says.
"The first [shift] is the realisation that the layers are going to physically converge; the architectural layers may still exist but they are going to sit within a box as opposed to multiple boxes," says Welch.
The implementation of this started with the convergence of the Optical Transport Network (OTN) and dense wavelength division multiplexing (DWDM) layers, and the efficiencies that brings to the network.
That is a big deal, says Welch.
Optical designers have long been making transponders for optical transport. But in the integrated OTN-DWDM layer the element is no longer the transponder but the transceiver. "Even that subtlety means quite a bit," says Welch. "It means that my metrics are no longer 'gray optics in, long-haul optics out', it is 'switch-fabric to fibre'."
Infinera has its own OTN-DWDM platform convergence with the DTN-X platform, and the trend was reaffirmed at the show by the likes of Huawei and Ciena, says Welch: "Everyone is talking about that integration."
The second layer integration stage involves multi-protocol label switching (MPLS). Instead of transponder point-to-point technology, what is being considered is a common platform with an optical management layer, an OTN layer and, in future, an MPLS layer.
"The drive for that box is that you can't continue to scale the network in terms of bandwidth, power and cost by taking each layer as a silo and reducing it down," says Welch. "You have to gain benefits across silos for the scaling to keep up with bandwidth and economic demands."
Super-channels
Optical transport has always been about increasing the data rates carried over wavelengths. At 100 Gigabit-per-second (Gbps), however, companies now use one or two wavelengths - carriers - onto which data is encoded. As vendors look to the next generation of line-side optical transport - what follows 100Gbps - the use of multiple carriers, or super-channels, will continue; this was another show trend.
Infinera's technology uses a 500Gbps super-channel based on dual polarisation, quadrature phase-shift keying (DP-QPSK). The company's transmit and receive photonic integrated circuit pair comprises 10 wavelengths (two 50Gbps carriers per 50GHz band).
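Taking those quoted figures at face value, a quick back-of-envelope sketch of the super-channel's capacity and spectral efficiency:

```python
# Figures as quoted: 10 carriers of 50Gbps each, two per 50GHz band
carriers, rate_gbps, bands = 10, 50, 5
capacity_gbps = carriers * rate_gbps   # 500 Gbps super-channel
spectrum_ghz = bands * 50              # 250 GHz of spectrum occupied
print(capacity_gbps / spectrum_ghz)    # 2.0 bits/s/Hz
```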
Ciena and Alcatel-Lucent detailed their next-generation ASICs at OFC. These chips, to appear later this year, support higher-order modulation schemes such as 16-QAM (quadrature amplitude modulation) carried over multiple wavelengths. Going from DP-QPSK to 16-QAM doubles the data rate of a carrier from 100Gbps to 200Gbps; using two carriers, each at 16-QAM, the two vendors can deliver 400Gbps.
"The concept of this all having to sit on one wavelength is going by the wayside," say Welch.
Capacity challenges
"Over the next five years there are some difficult trends we are going to have to deal with, where there aren't technical solutions," says Welch.
The industry is already talking about fibre capacities of 24 Terabit using coherent technology. Greater capacity is also starting to be traded against reach. "A lot of the higher QAM rate coherent doesn't go very far," says Welch. "16-QAM in true applications is probably a 500km technology."
This is new for the industry. In the past, a 10Gbps service could be scaled to an 800 Gigabit system using 80 DWDM wavelengths. The same applies to 100Gbps, which scales to 8 Terabit.
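Welch's arithmetic in sketch form, assuming a fully filled 80-channel C-band system (the channel count is as quoted; the Terabit line is the extrapolation he is worried about):

```python
# Fibre capacity = per-wavelength rate x number of DWDM channels
CHANNELS = 80
for rate_gbps in (10, 100, 1000):
    print(f"{rate_gbps}Gbps x {CHANNELS} channels = "
          f"{rate_gbps * CHANNELS / 1000:g} Tbps")
# 0.8 Tbps, 8 Tbps, then 80 Tbps -- and, as Welch notes, no technology yet
# lets the fibre plant carry the 50-100 Terabit a Terabit service implies.
```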
"I'm used to having high-capacity services and I'm used to having 80 of them, maybe 50 of them," says Welch. "When I get to a Terabit service - not that far out - we haven't come up with a technology that allows the fibre plant to go to 50-100 Terabit."
This issue is already leading to fundamental research looking at techniques to boost the capacity of fibre.
PICs
However, in the shorter term, the smarts to enable high-speed transmission and higher capacity over the fibre are coming from the next-generation DSP-ASICs.
Is Infinera's monolithic integration expertise, with its 500 Gigabit PIC, becoming a less important element of system design?
"PICs have a greater differentiation now than they did then," says Welch.
Unlike Infinera's 500Gbps super-channel, the recently announced ASICs use two carriers and 16-QAM to deliver 400Gbps. But the issue is the reach that can be achieved with 16-QAM: "The difference is 16-QAM doesn't satisfy any long-haul applications," says Welch.
Infinera argues that a fairer comparison with its 500Gbps PIC is dual-carrier QPSK, each carrier at 100Gbps. Once the ASIC and optics deliver 400Gbps using 16-QAM, it is no longer a valid comparison because of reach, he says.
Three parameters must be considered here, says Welch: dollars/Gigabit, reach and fibre capacity. "I have to satisfy all three for my application," he says.
Long-haul operators are extremely sensitive to fibre capacity. "I need as much fibre capacity as I can get," he says. "But I also need reach."
In data centre applications, for example, reach is becoming an issue. "For the data centre there are fewer on and off ramps and I need to ship truly massive amounts of data from one end of the country to the other, or one end of Europe to the other."
The lower reach of 16-QAM suits the metro, but Welch argues that this is a segment that needs lower cost rather than the highest capacity. Here 16-QAM does reduce cost by delivering more bandwidth from the same hardware.
Meanwhile, Infinera is working on its next-generation PIC that will deliver a Terabit super-channel using DP-QPSK, says Welch. The PIC and the accompanying next-generation ASIC will likely appear in the next two years.
Such a 1 Terabit PIC will reduce the cost of optics further but it remains to be seen how Infinera will increase the overall fibre capacity beyond its current 80x100Gbps. The integrated PIC will double the 100Gbps wavelengths that will make up the super-channel, increasing the long-haul line card density and benefiting the dollars/Gigabit and reach metrics.
In part two, ADVA Optical Networking, Ciena, Cisco Systems and market research firm Ovum reflect on OFC/NFOEC.
OFC/NFOEC 2012: Technical paper highlights
Source: The Optical Society
Novel technologies, operators' experiences with state-of-the-art optical deployments and technical papers on topics such as next-generation PON and 400 Gigabit and 1 Terabit optical transmission are some of the highlights of the upcoming OFC/NFOEC conference and exhibition, to be held in Los Angeles from March 4-8, 2012. Here is a taste of some of the technical paper highlights.
Optical networking
In Spectrum, Cost and Energy Efficiency in Fixed-Grid and Flex-Grid Networks (paper number 1248601), the Athens Information Technology Center evaluates single- and multi-carrier networks at rates up to 400 Gigabit-per-second (Gbps). One finding is that efficient spectrum utilisation and fine bit-rate granularity are essential if cost and energy efficiencies are to be realised.
In several invited papers, operators report their experiences with the latest networking technologies. AT&T Labs discusses advanced ROADM networks; NTT details the digital signal processing (DSP) aspects of 100Gbps DWDM systems and, in a separate paper, the challenge for Optical Transport Network (OTN) at 400Gbps and beyond, while Verizon gives an update on the status of MPLS-TP. As part of the invited papers, Finisar's Chris Cole outlines the next-generation CFP modules.
Optical access
Fabrice Bourgart of FT-Orange Labs details where the next-generation PON standards - NG-PON2 - are going, while NeoPhotonics' David Piehler outlines the state of photonic integrated circuit (PIC) technologies for PONs. This is also a topic tackled by Oclaro's Michael Wale: PICs for next-generation optical access systems. Meanwhile, Ao Zhang of Fiberhome Telecommunication Technologies discusses the state of FTTH deployments in the world's biggest market, China.
Switching, filtering and interconnect optical devices
NTT has a paper that details a flexible format modulator using a hybrid design based on a planar lightwave circuit (PLC) and lithium niobate. In a separate paper, NTT discusses silica-based PLC transponder aggregators for a colourless, directionless and contentionless ROADM, while Nistica's Tom Strasser discusses gridless ROADMs. Compact thin-film polymer modulators for telecoms is a subject tackled by GigOptix's Raluca Dinu.
One novel paper is on graphene-based optical modulators, by Ming Liu, Xiang Zhang and colleagues at UC Berkeley (paper number 1249064). The optical loss of graphene can be tuned by shifting its Fermi level, the authors note. The paper shows that such tuning can be used for a high-speed optical modulator at telecom wavelengths.
Optoelectronic Devices
CMOS photonic integrated circuits is the topic discussed by MIT's Rajeev Ram, who outlines a system-on-chip with photonic input and output. Applications range from multiprocessor interconnects to coherent communications (Paper Number: 1249068).
A polarisation-diversity coherent receiver on polymer PLC for QPSK and QAM signals is presented by Thomas Richter of the Fraunhofer Institute for Telecommunications (Paper Number: 1249427). The device has been tested in systems using 16-QAM and QPSK modulation up to 112 Gbps.
Core network
Ciena's Maurice O'Sullivan outlines 400Gbps/1Tbps high-spectral efficiency technology and some of the enabling subsystems. Alcatel-Lucent's Steven Korotky discusses traffic trends: drivers and measures of cost-effective and energy-efficient technologies and architectures for the optical backbone networks, while transport requirements for next-generation heterogeneous networks is the subject tackled by Bruce Nelson of Juniper Networks.
Data centre
IBM's Casimir DeCusatis presents a future - 2015-and-beyond - view of data centre optical networking. The data centre is also tackled by HP's Moray McLaren, in his paper on future computing architectures enabled by optical and nanophotonic interconnects. Optically-interconnected data centres are also discussed by Lei Xu of NEC Labs America.
Expanding usable capacity of fibre symposium
There is a special symposium at OFC/NFOEC entitled Enabling Technologies for Fiber Capacities Beyond 100 Terabits/second. The papers in the symposium discuss MIMO and OFDM, technologies more commonly encountered in the wireless world.
Rational and innovative times: JDSU's CTO Q&A Part II

"What happens after 100 Gig is going to be very interesting"
Brandon Collings, JDSU
How has JDS Uniphase (JDSU) adapted its R&D following the changes in the optical component industry over the last decade?
JDSU has been a public company for both periods [the optical boom of 1999-2000 and now]. The challenge JDSU faced in those times, when there was a lot of venture capital (VC) money flowing into the system, was that the money was sort of free money for these companies. It created an imbalance in that the money was not tied to revenue, which was a challenge for companies like JDSU that tie R&D spend to revenue. You also have much more flexibility [as a start-up] in setting different price points if you are operating on VC terms.
The situation now is very straightforward, rational and predictable.
There is not a huge army of R&D going on. That lack of R&D does not speed up the industry but what it does do is allow those companies doing R&D - and there is still a significant number - a lot of focus and clarity. It also requires a lot of partnership between us, our customers [equipment makers] and operators. The people above us can't just sit back and pick and choose what they like today from myriad start-ups doing all sorts of crazy things.
We very much appreciate this rational time. Visions can be more easily discussed, things are more predictable and everyone is playing from a similar set of rules.
Given the changes at the research labs of system vendors and operators, is there a risk that insufficient R&D is being done, impeding optical networking's progress?
It is hard to say absolutely not, as fewer people doing things can slow things down. But the work those labs did covered a wide space, including outside of telecom.
There is still a sufficient critical mass of research at places like Alcatel-Lucent Bell Labs, AT&T and BT; there is increasingly work going on in new regions like Asia Pacific, and a lot more in and across Europe. It is also much more focussed - the volume of workers may have decreased but the task in hand still remains.
"There are now design tradeoffs [at speeds higher than 100Gbps] whereas before we went faster for the same distance"
How does JDSU foster innovation and ensure it is focussing on the right areas?
I can't say that we have at JDSU a process that ensures innovation. Innovation is fleeting and mysterious.
We stay very connected to our key customers who are more on the cutting edge. We have very good personal and professional relationships with their key people. We have the same type of relationship with the operators.
I and my team very regularly canvass and have open discussions about what is coming. What does JDSU see? What do you see? What technologies are blossoming? We talk through those sort of things.
That isn't where innovation comes from. But what that can do is sow the seeds for the opportunity for innovation to happen.
We take that information and cycle it through all our technology teams. The guys in the trenches - the material scientists, the free-space optics design guys - we try to educate them with as much of an understanding of the higher-level problems that ultimately their products, or the products they design into, will address.
What we find is that these guys are pretty smart. If you arm them with a wider understanding, you get a much more succinct and powerful innovation than if you try to dictate to a material scientist here is what we need, come back when you are done.
It is a loose approach, there isn't a process, but we have found that the more we educate our keys [key guys] to the wider set of problems and the wider scope of their product segments, the more they understand and the more they can connect their sphere of influence from a technology point of view to a new capability. We grab that and run with it when it makes sense.
It is all about communicating with our customers and understanding the environment and the problem, then spreading that as wide as we can so that the opportunity for innovation is always there. We then nurse it back into our customers.
Turning to technology, you recently announced the integration of a tunable laser into an SFP+, a product you expect to ship in a year. What platforms will want a tunable laser in this smallest pluggable form factor?
The XFP has been on routers and OTN (Optical Transport Network) boxes - anything that has 10 Gig - and those interfaces have been migrated over to SFP+ for compactness and faceplate space. There are already packet and OTN devices that use SFP+, and DWDM formats of the SFP+, to do backhaul and metro ring applications. The expectation is that while there are more XFP ports today, the next round of equipment will move to SFP+.
Certainly the Ciscos, Junipers and the packet guys are using tunable XFPs in great volume for IP over DWDM and access networks, but the more telecom-centric players riding OTN links or maybe native Ethernet links over their metro rings are probably the larger volume.
What distance can the tunable SFP+ achieve?
The distances will be pretty much the same as the tunable XFP. We produce that in a number of flavours, whether it is metro or long-haul. The initial SFP+ will likely be the metro reaches, 80km and things like that.
What is the upper limit of the tunable XFP?
We produce a negative chirp version which can do 80km of uncompensated dispersion, and then we produce a zero chirp which is more indicative of long-haul devices.
In that case the upper limit is more defined by the link engineering and the optical signal-to-noise ratio (OSNR), the extent of the dispersion compensation accuracy and the fibre type. It starts to look and smell like a long-haul lithium niobate transceiver where the distances are limited by link design as much as by the transceiver itself. As for the upper limit, you can push 1000km.
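A rough rule of thumb connects those distances to dispersion: uncompensated reach is approximately the transmitter's dispersion tolerance divided by the fibre's dispersion coefficient. A minimal sketch with illustrative figures (the ~1360ps/nm tolerance is an assumption back-calculated from the 80km figure, not a JDSU specification):

```python
D_SMF = 17.0  # ps/nm/km, typical for standard G.652 fibre at 1550nm

def dispersion_limited_reach_km(tolerance_ps_per_nm):
    # Uncompensated reach ~ dispersion tolerance / fibre dispersion
    return tolerance_ps_per_nm / D_SMF

print(dispersion_limited_reach_km(1360))  # ~80km, the negative-chirp case
# A zero-chirp device on a compensated, amplified link is instead limited by
# OSNR and link engineering, which is how reaches near 1000km become possible.
```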
An XFP module can accommodate 3.5W while an SFP+ is about 1.5W. How have you reduced the power to fit the design into an SFP+?
It may be a generation before we get to that MSA level so we are working with our customers to see what level they can tolerate. We'll have to hit a lot less than 3.5W but it is not clear that we have to hit the SFP+ MSA specification. We are already closer now to 1.5W than 3.5W.
"I can't say that we have at JDSU a process that ensures innovation. Innovation is fleeting and mysterious."
Semiconductors now play a key role in high-speed optical transmission. Will semiconductors take over more roles and become a bigger part of what you do?
Coherent transmission [that uses an ASIC incorporating a digital signal processor (DSP)] is not going away. There is a lot of differentiation at the moment in what happens in that DSP, but I think overall it is going to be a tool the system houses use to get the job done.
If you look at 10 Gig, the big advancement there was FEC [forward error correction] and advanced FEC. In 2003 the situation was a lot like it is today: who has the best FEC was something that was touted.
If you look at coherent technology, it is certainly a different animal but it is a similar situation: it is the big enabler for 40 and 100 Gig. Coherent is advanced technology, enhanced FEC was advanced technology back then, and over time it turned into a standardised, commoditised piece that is central and ubiquitously used for network links.
Coherent has more diversity in what it can do but you'll see some convergence and commoditisation of the technology. It is not going to replace or overtake the importance of photonics. In my mind they play together intimately; you can't replace the functions of photonics with electronics any time soon.
From a JDSU perspective, we have a lot of work to do because the bulk of the cost, the power and the size is still in the photonics components. The ASIC will come down in power, it will follow Moore's Law, but we will still need to work on all that photonics stuff because it is a significant portion of the power consumption and it is still the highest portion of the cost.
JDSU has made acquisitions in the area of parallel optics. Given there is now more industry activity here, why isn't JDSU more involved in this area?
We have been intermittently active in the parallel optics market.
The reality is that it is a fairly fragmented market: there are a lot of applications, each one with its own requirements and customer base. It is tough to spread one platform product around these applications. That said, parallel optics is now a mainstay for 40 and 100 Gig client [interfaces] and we are extremely active in that area: the 4x10, 4x25 and 12x10G [interfaces]. So that other parallel optics capability is finding its way into the telecom transceivers.
We do stay active in the interconnect space but we are more selective in what we get engaged in. Some of the rules there are very different: the critical characteristics for chip interconnect are very different to transceivers, for example. It may be much better to have on-chip optics versus off-chip optics. Obviously that drives completely different technologies so it is a much more cloudy, fragmented space at the moment.
We are very tied into it and are looking for those proper opportunities where we do have the technologies to fit into the application.
How does JDSU view the issues of 200, 400 Gigs and 1 Terabit optical transmission?
What happens after 100 Gig is going to be very interesting.
Several things have happened. We have used up the 50GHz [DWDM] channel, we can't go faster in the 50GHz channel - that is the first barrier we are bumping into.
Second, we're finding there is a challenge to do electronics well beyond 40 Gigabit. You start to get into electronics that have to operate at much higher rates - analogue-to-digital converters, modulator drivers - you get into a whole different class of devices.
Third, we have used all of our tools: we have used FEC, we are using soft-decision FEC and coherent detection. We are bumping into the OSNR problem and we don't have any more tools to run line rates that have less power to noise yet somehow recover that with some magic technology like FEC at 10 Gig, and soft-decision FEC and coherent at 40 and 100 Gig.
This is driving us into a new space where we have to do multi-carrier and bigger channels. It is opening up a lot of flexibility because, well, how wide is that channel? How many carriers do you use? What type of modulation format do you use?
What format you use may dictate the distance you go and, inversely, the width of the channel. We have all these new knobs to play with and they are all tradeoffs: distance versus spectral efficiency in the C-band. The number of carriers will potentially drive the cost because you have to build parallel devices. There are now design tradeoffs whereas before we went faster for the same distance.
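To make those knobs concrete, here is a minimal sketch enumerating a few hypothetical transponder configurations; the symbol rate, format list and reach figures are illustrative assumptions, not JDSU figures:

```python
from math import ceil, log2

BAUD = 32  # Gbaud per carrier, assumed for illustration
# (format, constellation points, illustrative reach in km)
configs = [("DP-QPSK", 4, 2000), ("DP-8QAM", 8, 1200), ("DP-16QAM", 16, 500)]

for name, m, reach_km in configs:
    per_carrier = BAUD * log2(m) * 2      # Gbps raw, dual polarisation
    carriers = ceil(400 / per_carrier)    # carriers needed for a 400G channel
    print(f"{name}: {per_carrier:.0f}Gbps/carrier, {carriers} carriers "
          f"for 400G, ~{reach_km}km reach")
```

Higher-order formats need fewer carriers - and hence less spectrum and fewer parallel devices - for the same 400G, but give up reach, which is exactly the tradeoff in play.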
We will be seeing a lot of devices and approaches from us and our customers that provide those tradeoffs flexibly so the carriers can do the best they can with what mother nature will allow at this point.
That means transponders that do four carriers: two of them do 200 Gig nicely packed together but they only achieve a few hundred kilometres, while a couple of other carriers right next door go a lot further but are a little bit wider, so that density-versus-reach tradeoff is in play. That is what is going to be necessary to get the best of what we can do with the technology.
That is the transmission side. On the transport side - the ROADMs and amplifiers - they have to accommodate these quirky new formats and reach requirements.
We need to get amplifiers to get the noise down. So this is introducing new concepts like Raman and flex[ible] spectrum to get the best we can do with these really challenging requirements like trying to get the most reach with the greatest spectral efficiency.
How do you keep abreast of all these subject areas besides conversations with customers?
It is a challenge; there aren't many companies in this space that are broader than JDSU's optical comms portfolio.
We do have a team and the team has its area of focus, whether it is ROADMs, modulators, transmission gear or optical amplifiers. We segment it that way but it is a loose segmentation so we don't lose ideas crossing boundaries. We try to deal with the breadth that way.
Beyond that, it is about staying connected with the right people at the customer level, having personal relationships so that you can have open discussions.
And then it is knowing your own organisation, knowing who to pull into a nebulous situation that can engage the customer, think on their feet and whiteboard there and then rather than [bringing in] intelligent people that tend to require more of a recipe to do what they are doing.
It is all about how to get the most from each team member and creating those situations where the right things can happen.
For Part I of the Q&A, click here
