From the NVIDIA CUDA Development quick start guide:

CUDA-enabled GPUs have hundreds of cores that can collectively run thousands of computing threads. Each core has shared resources, including registers and memory. The on-chip shared memory allows parallel tasks running on these cores to share data without sending it over the system memory bus.
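As a minimal sketch of what that paragraph describes, here is a classic block-level sum reduction that stages data in `__shared__` on-chip memory so threads in one block exchange partial results without extra trips over the system memory bus (the kernel name, block size of 256, and buffer names are illustrative, not from the guide):

```cuda
#include <cstdio>

// Hedged sketch: block-level sum reduction using on-chip shared memory.
// Each block loads a tile from global memory once, then cooperates
// entirely in shared memory to produce one partial sum per block.
__global__ void blockSum(const float* in, float* out, int n) {
    __shared__ float tile[256];          // one slot per thread in the block
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;

    tile[tid] = (i < n) ? in[i] : 0.0f;  // single global-memory load
    __syncthreads();                     // make all loads visible block-wide

    // Tree reduction: strides halve each step, all traffic stays on-chip
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = tile[0];  // one result per block
}
```

Launched as `blockSum<<<numBlocks, 256>>>(d_in, d_out, n)`, each block's threads touch global memory once and do the rest of the work in shared memory.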

Real-time simulation of hundreds of entities immediately comes to mind; this sounds interesting for my next project in adm.

Check out the guide here, along with the programming and SDK references:
