
Time Lens 2.0

By James Van Howe


This post originally appeared on Jim’s Cleo Blog and is reproduced with permission from its author.

Brian Kolner and Moshe Nazarathy coined the term “time-lens” in 1989 after using one to compress a pulse. They built a system in the time domain that was a complete analog to a lens system in space. Their time-lens took a fat pulse and “focused” it, just as a spatial lens can take a fat beam and focus it to a smaller size. For more details, see Kolner’s well-written 1994 review on space-time duality and van Howe and Xu’s 2006 review on temporal-imaging devices.
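To make the analogy concrete (standard temporal-imaging notation, not taken from the original post, and with signs that vary between references), a thin lens and a time-lens both impart a quadratic phase, one across the beam profile and one across the pulse:

\[
\phi_{\mathrm{lens}}(x) = -\frac{k\,x^{2}}{2f},
\qquad
\phi_{\mathrm{time}}(t) = \frac{\omega_{0}\,t^{2}}{2 f_{T}},
\]

where f is the focal length and f_T is Kolner’s “focal time.” Diffraction of a beam and dispersion of a pulse obey the same parabolic equation, which is why the fat-beam/fat-pulse picture carries over so cleanly.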

Because much of my thesis work focused (pun intended) on temporal-imaging devices, I can’t help seeing them everywhere. This year’s CLEO conference was no exception, with some talks being more direct about it than others.

Takahide Sakamoto from the National Institute of Information and Communications Technology in Tokyo, Japan discussed time-lenses without using the word itself in his tutorial, CMBB1, “Optical Comb and Pulse Generation from CW Light.” Sakamoto showed impressive work on comb synthesis from CW light using electro-optic (EO) modulation. He demonstrated that EO phase modulation provides the most efficient way to move from CW light to the picosecond bandwidth regime. Higher-order nonlinearities like chi-3 in fiber (EO modulation is a chi-2 process) can then be used to push the bandwidth into the femtosecond regime. Sakamoto stressed a clever biasing and driving technique using an intensity modulator that allowed truly flat comb spectra.
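As a rough illustration of that first step, here is a minimal numerical sketch (my own, not Sakamoto’s setup; the 10 GHz drive and 5 rad modulation index are assumed values) showing that sinusoidal EO phase modulation of a CW carrier already yields a comb whose line amplitudes follow Bessel functions of the modulation index:

```python
import numpy as np

# Minimal sketch: sinusoidal EO phase modulation of a CW carrier.
# Line n of the comb has amplitude J_n(m), so roughly 2m+1 strong lines
# appear around the carrier before any chi-3 broadening.
fm = 10e9                  # RF drive frequency (assumed)
m = 5.0                    # phase-modulation index in radians (assumed)
fs = 128 * fm              # sampling rate
n = 64 * 128               # 64 RF periods on the grid
t = np.arange(n) / fs

field = np.exp(1j * m * np.cos(2 * np.pi * fm * t))   # CW carrier at baseband

spectrum = np.fft.fftshift(np.fft.fft(field)) / n
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))
power = np.abs(spectrum) ** 2

# The strongest lines sit on a 10 GHz grid, and the envelope rolls off
# beyond |line| ~ m, which is why chi-3 mixing is needed to reach
# femtosecond (multi-THz) bandwidths.
for idx in sorted(np.argsort(power)[-11:]):
    print(f"{freqs[idx] / 1e9:8.1f} GHz   {10 * np.log10(power[idx]):6.1f} dB")
```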

Other work leveraging temporal-imaging concepts included CMD1, “Tunable high-energy soliton pulse generation from a large-mode-area fiber pumped by a picosecond time-lens source,” from Chris Xu’s group at Cornell University and JTuI77, “Scalable 1.28-Tb/s Transmultiplexer Using a Time Lens” by Petrillo and Foster. The former used electro-optic modulation as the time-lens to generate a seed source from CW light for soliton shifting. The latter used four-wave mixing as the time-lens mechanism in order to look at the Fourier transform of a data packet for high-speed conversion from time-division multiplexing to wavelength-division multiplexing (just as a spatial lens can provide a Fourier transform of a spatial profile, a time-lens can give the power spectrum of a temporal profile). Note that the Xu group has also developed a time-lens source for CARS microscopy.
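To see why a time-lens hands you a Fourier transform, here is a hedged sketch (my own numbers and code, not Petrillo and Foster’s implementation; the dispersion value, pulse widths, and slot positions are all assumed): dispersion with group-delay dispersion D followed by a quadratic temporal phase of chirp rate 1/D maps the input time axis onto the output frequency axis, so pulses in different time slots come out at different optical frequencies.

```python
import numpy as np

# Hedged sketch of a time-to-frequency Fourier transform:
# dispersion D, then a time lens matched to D (the "focal" condition).
N = 2 ** 14
dt = 50e-15                                   # 50 fs time grid (assumed)
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dt))

slots = [-20e-12, 0.0, 25e-12]                # three 2 ps pulses as a toy packet
a = sum(np.exp(-((t - t0) / 2e-12) ** 2 + 0j) for t0 in slots)

D = 100e-24                                   # group-delay dispersion in s^2 (assumed)

def disperse(x, gdd):
    """Apply an all-pass dispersive filter exp(i*gdd*w^2/2) (sign convention assumed)."""
    X = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(x)))
    return np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(X * np.exp(0.5j * gdd * w ** 2))))

b = disperse(a, D)                            # input dispersion D
c = b * np.exp(0.5j * t ** 2 / D)             # time lens with chirp rate 1/D

C = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(c))))  # output spectrum
t_mapped = D * w                              # time-to-frequency map: omega = t/D

for t0 in slots:
    window = np.abs(t_mapped - t0) < 5e-12    # look near where the slot should land
    print(f"slot {t0 * 1e12:6.1f} ps  ->  spectral peak at mapped time "
          f"{t_mapped[np.argmax(C * window)] * 1e12:6.1f} ps")
```

Running the sketch, each slot’s spectral peak lands back at its own slot time under the t = D·omega mapping, which is exactly the kind of OTDM-to-WDM conversion a transmultiplexer exploits.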

Work from Andrew Weiner’s group also made use of time-lenses: CWN3, “Broadband, Spectrally Flat Frequency Combs and Short Pulse Sources from Phase modulated CW: Bandwidth Scaling and Flatness Enhancement using Cascaded FWM,” and CFG6, “Microwave Photonic Filters with > 65-dB Sidelobe Suppression Using Directly Generated Broadband, Quasi–Gaussian Shaped Optical Frequency Combs.” These works used a front end similar to the one Sakamoto described, but then enhanced the bandwidth further with cascaded four-wave mixing.

Finally, former CLEO Blogger Ksenia Dolgaleva authored CThHH6, “Integrated Temporal Fourier Transformer Based on Chirped Bragg Grating Waveguides,” showing a compact, integrated Fourier transformer, which, though not a time-lens, is another device based on space-time duality. This paper draws upon co-author Jose Azana’s previous fiber Bragg grating work, just one of Azana’s many contributions to the field of temporal imaging.

If you look hard enough, you can see time-lenses anywhere: all you need is a device that gives a quadratic phase in time to an optical wavefront (nonlinear frequency mixing, used everywhere in optics, is one technique that works well). However, the big advantage of recognizing a time-lens when you have one is that you can bring all of the knowledge of spatial imaging systems to your work with a simple change of variables.
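For reference, the change of variables usually quoted in the temporal-imaging literature (standard notation, not from the post) trades the transverse coordinate for time and diffraction for dispersion, and the thin-lens imaging equation carries over directly:

\[
\frac{1}{D_{1}} + \frac{1}{D_{2}} = \frac{1}{D_{f}},
\qquad
M = -\frac{D_{2}}{D_{1}},
\]

where D_1 and D_2 are the input and output group-delay dispersions, D_f is the focal GDD of the time-lens (its quadratic phase is t^2/2D_f), and M is the temporal magnification.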


Posted: 9 May 2011 by James Van Howe
