Slide 10 of 55
swkonz

One thing I was thinking about was compile time when developing DSP applications for FPGAs. From what I understand, "compiling" a design into a layout for an FPGA can take MUCH longer than compiling for traditional CPUs, which I imagine is a big advantage for traditional CPUs.

pslui88

For a real-world example of how long this takes: I took EE 180 (Digital Systems Architecture), where we wrote a 5-stage pipelined processor in Verilog, a hardware description language, and at the end it took 30min-1hr to "synthesize" it. Logic synthesis is the process by which the Verilog code is converted to a binary file called a bitstream. The bitstream was finally used to program an FPGA! We also wrote a Sobel edge detector in Verilog and used the same process to synthesize it and run it on the FPGA. If you think these are cool labs (which they are!), I recommend EE 180. Christos is also super nice!
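For anyone curious what the Sobel lab actually computes, here's the same edge-detection math as a plain-Python sketch (a software illustration of the algorithm, not the EE 180 Verilog, and the function/variable names are my own):

```python
import math

# Standard 3x3 Sobel kernels for horizontal (GX) and vertical (GY) gradients
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(img):
    """Return the gradient magnitude at each interior pixel of a 2D grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Convolve the 3x3 neighborhood with each kernel
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.sqrt(gx * gx + gy * gy)
    return out

# A sharp vertical edge: left half dark, right half bright
img = [[0, 0, 255, 255] for _ in range(4)]
edges = sobel(img)  # interior pixels along the edge get a large magnitude
```

Every output pixel depends only on its own 3x3 neighborhood, which is exactly why this maps so well onto FPGA hardware: the per-pixel convolutions are independent and can run in parallel.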

l-henken

@swkonz The tradeoffs between specialized hardware and traditional CPUs are plentiful, but if you need performance, the choice is sometimes trivial because CPUs just won't do. That is why research into FPGA workflows (programming languages like Spatial, simulation software, compilation tools, etc.) is really exciting and important. Once you work with Vivado or similar tools, you start to see the need for a better toolchain... That said, I strongly echo the plug for EE 180.

chii

Sometimes devices have both CPUs and ASICs to leverage their respective advantages.

arkhan

Where are FPGAs deployed? I can see how they'd be useful for hardware development/experimentation before finalizing a design into an ASIC, or for implementing hardware acceleration where you won't build enough units to justify the costs of an ASIC. But are they used elsewhere?

ajayram

@arkhan one use case we briefly touched on this lecture was using FPGAs in data centers. If a service requires some hardware acceleration, the cloud provider can't provision a custom ASIC for the job, but it can provision an FPGA, which can be reprogrammed depending on what program the host is running.

felixw17

I wonder how these various hardware approaches to computing were historically created... For instance, even though I'm much more familiar with programming on a CPU as opposed to a GPU, it almost seems to me that the concept of a GPU is much simpler than a CPU (after all, a GPU is really good at supporting parallel computation, whereas a CPU seems much more complex because it has to support all different kinds of computation).

icebear101

I once participated in a research project that used an FPGA to gain performance improvements. My sense is that FPGAs are well suited to fixed logic, like the decoding process in our project. Also, such improvements may be more common at big companies, because once the code is proven effective (even by only a few percent), it can be widely applied.
