Slide 61 of 63
suninhouse

If programmers work at a higher level of abstraction, the implementation underneath can keep being optimized without any change to the abstraction itself. On the other hand, if the abstraction sits too far from the implementation, performance can become less predictable to the programmer.

mziv

In lecture, Kayvon mentioned that ISPC was able to hit such excellent performance because the developer was willing to accept tradeoffs and constraints that the GNU people weren't. Does anyone know what those constraints were? At first glance, the language seems robust and comprehensive.

nickbowman

@mziv Yeah, if anyone is more familiar with the constraints in the ISPC programming model, I would love to learn the specifics too. In the meantime, I found the "Life of ISPC" blog post series that Kayvon mentioned super cool; the first post covers the origins of ISPC and why the rigid approach Intel was originally taking to the problem wasn't working. In particular, moving from building better and better compilers that auto-vectorize ordinary C code (which is what Intel was trying to do) to actually changing the programming model (the gangs of many program instances that Matt built into ISPC) required changing the programming language from C to something C-like. What constraints that C-like language imposes I wasn't able to glean from the blog posts, so it would be great to hear from anyone with more expertise in the area!

blipblop

This slide asks us to consider optimizations that are possible when implementing ISPC's foreach but not a higher-order map. I tend to think of ISPC's foreach and map as completely equivalent, so I don't know how to answer this. Can anybody think of an optimization that is possible with ISPC's foreach but not with a generic map?
