
Entries in AI (21)

Thursday
Jun062024

Cloud and AI: Opportunities that must be grabbed

The founder of Cloud Light, Dennis Tong, talks about the company, how its sale to Lumentum came about, and the promise of cloud and AI markets for optics.

For Dennis Tong (pictured), Hong Kong is a unique place that has a perfect blend of the East and West.

Tong, the founder and CEO of optical module specialist Cloud Light, should know. The company is headquartered in Hong Kong and has R&D offices in Hong Kong and Taipei, Taiwan. Cloud Light also has manufacturing sites in Asia: in the Chinese city of Dongguan—two hours by car north of Hong Kong—and in the Philippines.

Now, Cloud Light is part of Lumentum. The U.S. photonics firm bought the optical module maker for $750 million in November 2023.

Click to read more ...

Wednesday
Dec132023

Broadcom taps AI to improve switch chip traffic analysis

Broadcom's Trident 5-X12 networking chip is the company's first to add an artificial intelligence (AI) inferencing engine.

The latest Trident, Tomahawk, and Jericho devices. Source: Broadcom.

Data centre operators can use their network traffic to train the chip's neural network. The Trident 5's inference engine, dubbed the Networking General-purpose Neural-network Traffic-analyzer or NetGNT, is loaded with the resulting trained model to classify traffic and detect security threats.
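The article describes the overall flow — train a model offline on captured traffic, then load the trained result onto the chip's inference engine to classify flows. Broadcom does not disclose NetGNT's model or features, so the sketch below is a generic, illustrative stand-in: a trivial threshold "model" fitted to toy flow features. All names, features, and numbers are hypothetical.

```python
# Hypothetical sketch of the train-offline, deploy-on-chip flow the
# article describes. The feature set and "model" are illustrative
# stand-ins, not Broadcom's NetGNT design.
import random

def extract_features(packet_sizes):
    """Toy per-flow features: mean packet size and burstiness."""
    mean = sum(packet_sizes) / len(packet_sizes)
    burst = max(packet_sizes) - min(packet_sizes)
    return [mean, burst]

def train_threshold_model(flows, labels):
    """Stand-in for neural-network training: sweep a mean-packet-size
    threshold and keep the one that best separates normal (1) from
    suspicious (0) flows. A real flow would export trained weights."""
    best_t, best_acc = 0.0, 0.0
    for t in range(0, 1600, 50):
        preds = [1 if extract_features(f)[0] > t else 0 for f in flows]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = float(t), acc
    return best_t

# Synthetic "traffic": small-packet scan-like flows vs. bulk transfers.
random.seed(0)
scans = [[random.randint(40, 100) for _ in range(8)] for _ in range(20)]
bulk = [[random.randint(1000, 1500) for _ in range(8)] for _ in range(20)]
flows, labels = scans + bulk, [0] * 20 + [1] * 20

# The "trained model" (here, one threshold) is what would be loaded
# onto the switch datapath to classify live traffic at line rate.
threshold = train_threshold_model(flows, labels)
print(threshold)
```

The point of the split is that training happens in software on the operator's own traffic, while the switch only runs the cheap inference step in hardware.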

"It is the first time we have put a neural network focused on traffic analysis into a chip," says Robin Grindley, principal product line manager with Broadcom's Core Switching Group.

Click to read more ...

Tuesday
Sep122023

Webinar: Scaling AI clusters with optical interconnects

A reminder that this Thursday, September 14th, 8:00-9:00 am PT, I will be taking part in a webcast as part of the OCP Educational Webinar Programme that explores the future of AI computing with optical interconnects.

Data and computation drive AI success, and the hyperscalers are racing to build massive AI accelerator-based compute clusters. The impact of large language models and ChatGPT has turbocharged this race. Scaling demands innovation in accelerator chips, node linkages, fabrics, and topology.

For this webinar, industry experts will discuss the challenge of scaling AI clusters. The other speakers include Cliff Grossner, Ph.D., Yang Chen, and Bob Shine. To register, please click here.

Click to read more ...

Sunday
Jul302023

Modelling the Human Brain with specialised CPUs

Part 2: University of Manchester's Professor Steve Furber discusses the design considerations for developing hardware to mimic the workings of the human brain.

The designed hardware, the Arm-based Spiking Neural Network Architecture (SpiNNaker) chip, is being used to understand the workings of the brain and for industrial applications to implement artificial intelligence (AI).

Professor Steve Furber

Steve Furber has spent his career researching computing systems, but his interests have taken him on a path different to the mainstream.

As principal designer at Acorn Computers, he developed a reduced instruction set computing (RISC) processor architecture when microprocessors used a complex instruction set.

The RISC design became the foundational architecture for the processor design company Arm.

As an academic, Furber explored asynchronous logic when the digital logic of commercial chips was all clock-driven.

He then took a turn towards AI during a period when AI research was in the doldrums.

Click to read more ...

Friday
Jul142023

Using light to connect an AI processor’s cores

Lightelligence is using silicon photonics to connect the 64 cores of its AI processor. But the company has bigger ambitions for its optical network-on-chip technology.

Lightelligence has unveiled its optical network-on-chip designed to scale multiprocessor designs.

The start-up’s first product showcasing the technology is the Hummingbird, a system-in-package that combines Lightelligence’s 64-core artificial intelligence (AI) processor and a silicon photonics chip linking the processor’s cores.

Maurice Steinman

A key issue impeding the scaling of computing resources is the ‘memory wall’, which refers to the growing gap between processor and memory speeds, leaving processors idle as they wait for data to crunch.
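A quick back-of-envelope illustration of why the memory wall worsens as compute speeds up: with memory latency held fixed, a faster core spends a larger fraction of its time stalled. The figures below are illustrative round numbers, not measurements from any Lightelligence part.

```python
# Toy memory-wall arithmetic: the fraction of time a core sits idle
# waiting on memory, assuming no latency hiding or overlap.
# All numbers are illustrative, not measured values.

def stall_fraction(ops_per_s, mem_latency_s, ops_between_misses):
    """Fraction of wall-clock time spent stalled if, after every
    'ops_between_misses' operations, the core waits one full memory
    access of 'mem_latency_s' seconds."""
    compute_time = ops_between_misses / ops_per_s
    return mem_latency_s / (compute_time + mem_latency_s)

# A 1 GHz core hitting 100 ns memory every 1,000 ops idles ~9% of
# the time; a 10x faster core with the same memory idles ~50%.
slow_core = stall_fraction(1e9, 100e-9, 1000)
fast_core = stall_fraction(10e9, 100e-9, 1000)
print(f"{slow_core:.0%} vs {fast_core:.0%}")  # prints "9% vs 50%"
```

Making compute faster without closing the memory (or interconnect) gap mostly buys more waiting, which is the motivation for approaches like optical core-to-core links.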

Click to read more ...

Wednesday
May172023

Neil McRae: What’s next for the telecom industry

In a talk at the FutureNet World conference, held in London on May 3-4, Neil McRae explains why he is upbeat about the telecoms industry's prospects.

Neil McRae at FutureNet World, London, earlier this month.

Neil McRae is tasked with giving the final talk of the two-day FutureNet World conference.

"Yeah, I'm on the graveyard shift," he quips.

McRae, the former chief network architect at BT, is now chief network strategist at Juniper Networks.

The talk’s title is "What's Next": McRae's take on the telecom industry and how it can grow.

Click to read more ...

Tuesday
Apr182023

Enfabrica’s chip tackles AI supercomputing challenges

  • Enfabrica’s accelerated compute fabric chip is designed to scale computing clusters comprising CPUs and specialist accelerator chips.
  • The chip uses memory disaggregation and high-bandwidth networking for accelerator-based servers tackling artificial intelligence (AI) tasks.

Rochan Sankar

For over a decade, cloud players have packed their data centres with x86-based CPU servers linked using tiers of Ethernet switches.

“The reason why Ethernet networking has been at the core of the infrastructure is that it is incredibly resilient,” says Rochan Sankar, CEO and co-founder of Enfabrica.

But the rise of AI and machine learning is causing the traditional architecture to change.

What is required is a mix of processors: CPUs and accelerators. Accelerators are specialist processors such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and custom ASICs developed by the hyperscalers.

Click to read more ...