Optical DSP

Published on Dec 17, 2015


What is DSP?

The world of science and engineering is filled with signals: images from remote space probes, voltages generated by the heart and brain, radar and sonar echoes, seismic vibrations, and countless other applications.

Digital Signal Processing is the science of using computers to understand these types of data. This includes a wide variety of goals: filtering, speech recognition, image enhancement, data compression, neural networks, and much more. DSP is one of the most powerful technologies that will shape science and engineering in the twenty-first century.

Digital Signal Processing (DSP) is used in a wide variety of applications, and it is difficult to find a good definition that is general.

We can start with dictionary definitions of the words:

- Digital - operating by the use of discrete signals to represent data in the form of numbers

- Signal - a variable parameter by which information is conveyed through an electronic circuit

- Processing - to perform operations on data according to programmed instructions

This leads us to a simple definition:

- Digital Signal Processing - changing or analyzing information that is measured as discrete sequences of numbers.
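As a minimal illustration of this definition (a sketch in plain Python; the sample rate and signal are assumptions chosen only for the example), a continuous signal can be sampled into a discrete sequence of numbers, and "processing" is then simply arithmetic on those numbers:

```python
import math

# Sample a 5 Hz sine wave at 100 samples per second for one second:
# the continuous signal becomes a discrete sequence of numbers.
SAMPLE_RATE = 100   # samples per second (assumed for illustration)
FREQUENCY = 5       # signal frequency in Hz (assumed for illustration)
samples = [math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

# "Processing" in the simplest sense: operate on the numbers.
# Here we halve the amplitude of every sample.
processed = [0.5 * x for x in samples]

print(len(samples))   # 100 numbers in the sequence
print(max(samples))   # peak of the original signal, close to 1.0
print(max(processed)) # peak after processing, close to 0.5
```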

DSP is used in a very wide variety of applications, but most share some common features:

- they use a lot of math (multiplying and adding signals)

- they deal with signals that come from the real world

- they require a response within a certain time
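The "multiplying and adding" in the first point above is the core operation of most DSP algorithms. A minimal sketch of a finite impulse response (FIR) filter in plain Python (the function name and the 3-point moving-average coefficients are illustrative assumptions, not a reference implementation):

```python
def fir_filter(signal, coefficients):
    """Multiply-and-add: each output sample is a weighted sum of the
    current and previous input samples (treating samples before the
    start of the signal as zero)."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coefficients):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

# A 3-point moving average: coefficients sum to 1, smoothing the input.
smooth = fir_filter([0, 3, 0, 3, 0, 3], [1/3, 1/3, 1/3])
print(smooth)  # approximately [0.0, 1.0, 1.0, 2.0, 1.0, 2.0]
```

Every output sample here costs three multiplications and two additions; real filters may use hundreds of coefficients per sample, which is why dedicated multiply-accumulate hardware matters.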

A digital signal processor (DSP) is a type of microprocessor, one that is extremely fast and powerful. What makes a DSP distinctive is that it processes data in real time; this real-time capability makes it well suited to applications where delays cannot be tolerated.
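What "real time" means here can be made concrete with a little arithmetic (the sample rate and block size below are assumptions typical of telephone-quality audio, used only for illustration): the deadline for processing each block of samples is fixed by the sampling rate, not by the speed of the processor.

```python
# Illustrative sketch, no real audio hardware involved:
# a real-time DSP must finish each block before the next one arrives.
SAMPLE_RATE = 8000   # samples per second (assumed, telephone-quality audio)
BLOCK_SIZE = 160     # samples per block (assumed)

# The per-block deadline comes from the sampling rate:
# 160 samples arrive every 160/8000 of a second, ready or not.
deadline_seconds = BLOCK_SIZE / SAMPLE_RATE
print(deadline_seconds)  # 0.02: the processor has 20 ms per block
```

If the processing of one block ever takes longer than this deadline, samples are lost, which is exactly the failure a real-time DSP is designed to prevent.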

General view of ODSP

Optical processors perform only very specific DSP functions; that is, they are not "general purpose" processors. If this is accurate, they cannot replace general-purpose command-and-control functions in satellites, robots, and the like. Their primary role would be in data integration, filtering, and similar work: computational tasks with a high degree of parallel operation rather than serial.
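The parallel-versus-serial distinction above can be sketched in plain Python (a hypothetical contrast for illustration; no optical hardware is modeled). An elementwise product is "embarrassingly parallel": every output element depends only on its own inputs, so all of them could in principle be computed simultaneously, as in a single optical pass. A running accumulation, by contrast, is inherently serial:

```python
# Parallel-friendly: each output element depends only on its own inputs,
# so all four multiplications could happen at the same time.
a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
parallel_friendly = [x * y for x, y in zip(a, b)]
print(parallel_friendly)  # [5, 12, 21, 32]

# Inherently serial: each output depends on the previous output,
# so the steps cannot be done simultaneously.
serial = []
acc = 0
for x in a:
    acc += x
    serial.append(acc)
print(serial)  # [1, 3, 6, 10]
```

Workloads dominated by the first pattern are the natural fit for optical methods; workloads dominated by the second are where general-purpose serial processors keep the advantage.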

That is not to say they could never evolve into general-purpose computing, but such work is not what optical methods are generally best at. It may require the development of an entirely different computing paradigm (how does one multiplex many diverse computing paths?). It sounds like a PERT-chart data/resource-flow compression nightmare, and solving that is *not* going to be easy.