
Wednesday, December 14, 2011

nVidia to Open Source CUDA

What's CUDA, you say? It's the C / C++ / Fortran programming environment that lets you write and compile code to run on nVidia GPUs — it's what lets stuff like Adobe's Mercury Playback Engine do its real-time magic. Bravo to nVidia for such a bold move, because this helps everyone. They did this fully knowing that the lack of CUDA on ATI / AMD GPUs has been hurting a lot of users. This is especially true for Mac users, who are often stuck with ATI GPUs and no other options. Of course, for this to happen someone will have to do a lot of actual compiler work to emit ATI-specific GPU machine instructions rather than nVidia ones, but it's certainly approachable now. Only good things to come. READ IT HERE
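For anyone who hasn't seen it, CUDA code mostly looks like plain C with a few GPU extensions. Here's a minimal generic sketch (my own example, nothing to do with Mercury Playback specifically) of a kernel that adds two arrays in parallel, one GPU thread per element:

```cuda
// Minimal illustrative CUDA C kernel: add two float arrays on the GPU.
// __global__ marks a function that runs on the GPU but is launched from the CPU.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Each thread computes its own global index from its block and thread IDs.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                 // guard: the last block may overhang the array
        c[i] = a[i] + b[i];
}

// Host side: launch enough 256-thread blocks to cover n elements, e.g.
//   vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
// where d_a, d_b, d_c are device pointers set up with cudaMalloc / cudaMemcpy.
```

It's exactly this kind of source that nVidia's compiler turns into their own GPU machine code, and that an ATI back end would have to translate into ATI instructions instead.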

or maybe not... from AnandTech:

"Finally, with the move to LLVM NVIDIA is also opening up CUDA, if ever so slightly. On a technical level NVIDIA’s CUDA LLVM compiler is a closed fork of LLVM (allowed via LLVM’s BSD-type license), and due to the changes NVIDIA has made it’s not possible to blindly plug in languages and architectures to the compiler. To actually add languages and architectures to CUDA LLVM you need the source code to it, and that’s where CUDA is becoming “open.” NVIDIA will not be releasing CUDA LLVM in a truly open source manner, but they will be releasing the source in a manner akin to Microsoft’s “shared source” initiative – eligible researchers and developers will be able to apply to NVIDIA for access to the source code. This allows NVIDIA to share CUDA LLVM with the necessary parties to expand its functionality without sharing it with everyone and having the inner workings of the Fermi code generator exposed, or having someone (i.e. AMD) add support for a new architecture and hurt NVIDIA’s hardware business in the process. "

To which I'll counter: CUDA has a spec. Someone with the resources could basically create a new compiler that outputs ATI GPU code. OpenCL is a step toward making GPU code agnostic to the hardware, but in reality it's still very young and probably not the most efficient way of working. Adding abstraction layers speeds up coding, but not app performance.