Slide 34 of 62
suninhouse

Deep networks can be very large, with millions to billions of parameters. AlexNet, for example, has about 660K units, 61M parameters, and over 600M connections.
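To make the parameter count concrete, a back-of-the-envelope sketch of the weight memory for a 61M-parameter network stored in 32-bit floats (the exact figure depends on the datatype and framework overhead):

```python
# Rough weight-memory footprint for a 61M-parameter model.
num_params = 61_000_000
bytes_per_param = 4  # fp32 weights, 4 bytes each
megabytes = num_params * bytes_per_param / (1024 ** 2)
print(f"{megabytes:.0f} MB")  # roughly 233 MB of weights alone
```

A couple hundred megabytes just for weights is why fitting and running such models on phones or drones is a challenge, and why compression techniques matter.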

This creates an issue for applications on a wide range of devices, such as mobile phones or drones.

icebear101

^^ Background: Originally, machine learning was deployed in the cloud. Devices transmit their data to the cloud, computation happens there, and the results are transmitted back to the devices. However, the latency depends heavily on network conditions, and security can also become an issue, since users may not want their private data uploaded to the cloud. As a result, computation has been moving onto the devices themselves.

x2020

I came across some "model parallelism" concepts for neural networks, where the NN is split and stored across different machines. It's another way to reduce the memory and computational load on each machine.
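A minimal sketch of the layer-wise flavor of model parallelism the comment describes (the `Device` class and dimensions here are purely illustrative): each simulated device stores only its own layer's weights, and the activation tensor is handed from one device to the next in a pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

class Device:
    """Simulated device holding the parameters of just one layer."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.01
        self.b = np.zeros(out_dim)

    def forward(self, x):
        # Fully connected layer followed by ReLU.
        return np.maximum(x @ self.W + self.b, 0.0)

# A 3-layer network split across 3 devices: each stores only
# its own layer's weights, not the whole model.
devices = [Device(256, 128), Device(128, 64), Device(64, 10)]

x = rng.standard_normal((1, 256))
for d in devices:  # activations flow device-to-device
    x = d.forward(x)
print(x.shape)  # (1, 10)
```

In a real system the per-layer hand-off would be a network transfer between machines (which adds communication latency), but the memory benefit is the same: no single machine needs to hold all the parameters.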
