[Yt-dev] Quick question about CUDA and GPUs

Matthew Turk matthewturk at gmail.com
Fri Jul 17 10:33:10 PDT 2009


Hi guys,

As an update: on OS X, Boost doesn't like linking against a framework
build -- so PyCUDA might not work when you install it.  The library (I
used 1.38.0, but I believe it should be identical for 1.39.0) can be
fixed with:

sudo install_name_tool -change \
    /System/Library/Frameworks/Python.framework/Versions/2.5/Python \
    /Library/Frameworks/Python.framework/Versions/Current/Python \
    libboost_python-xgcc40-mt.dylib

You might have to change the specifics based on which version of gcc,
etc., you use.  Other than this painful step of figuring things out,
Boost was trivial to install on OS X.

-Matt

On Fri, Jul 17, 2009 at 11:14 AM, Matthew Turk<matthewturk at gmail.com> wrote:
> Hi Brian,
>
> Well, good question.  It sounds like we are approaching critical mass,
> and certainly I think we are justified in trying to learn the
> fastest and best way to approach fast computation with limited memory
> bandwidth.  To that end, I'm thinking about a couple of things --
>
> 1. Separating our arrays into two chunks: fast and slow.  We already
> have a namespace for array generation -- this suggests that we
> can write an abstraction module that helps us separate arrays
> that need to be long-lived and *fast* from arrays that are okay to be
> slow (only a few operations) and that will be short-lived.
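The fast/slow split above could be sketched as a small tiering registry.  Everything here is illustrative, not yt's actual API: the field names and the `allocate` helper are made up, and the "fast" branch simply falls back to NumPy where a real version might hand back a pinned or device-resident array.

```python
import numpy as np

# Hypothetical registry mapping field names to a storage tier.  "fast"
# arrays are long-lived and would live on (or be pinned for) the GPU;
# "slow" arrays are short-lived scratch space that stays on the host.
FIELD_TIERS = {}

def register_field(name, tier):
    assert tier in ("fast", "slow")
    FIELD_TIERS[name] = tier

def allocate(name, shape, dtype=np.float64):
    # A real implementation might return a pycuda.gpuarray.GPUArray on
    # the "fast" branch; here both tiers fall back to plain NumPy.
    tier = FIELD_TIERS.get(name, "slow")
    return tier, np.empty(shape, dtype=dtype)

register_field("Density", "fast")         # reused by nearly every operation
register_field("RadialVelocity", "slow")  # derived once, then discarded

tier, rho = allocate("Density", (16, 16, 16))
print(tier)  # → fast
```

Unregistered fields default to "slow", so only the handful of hot arrays ever need explicit tagging.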
>
> 2. Moving projections and other heavy, already C-based operations
> onto the GPU -- or at least duplicating our procedures in both places.
> The advantage of doing projections on the GPU is that, in
> theory, we should become completely IO-limited.  The projections are
> already integer-based; furthermore, 32-bit integers get us
> surprisingly far.  The lightcone, for instance, can be fully addressed
> in GPU-space.
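A minimal CPU-side sketch of what that integer-addressed projection inner loop looks like: cells carry 32-bit integer (i, j) image-plane coordinates, and their values are scatter-added into a 2D buffer.  The scatter-add is the operation a CUDA kernel would replace with atomicAdd; the function name and shapes are illustrative, not yt's actual projection code.

```python
import numpy as np

def project(i, j, values, nx, ny):
    # Accumulate per-cell values onto an (nx, ny) image plane, with cell
    # positions given as int32 indices.  np.add.at handles repeated
    # indices correctly, just as atomicAdd would in a GPU kernel.
    buf = np.zeros((nx, ny), dtype=np.float64)
    np.add.at(buf, (i, j), values)
    return buf

rng = np.random.default_rng(0)
n = 1000
i = rng.integers(0, 8, n).astype(np.int32)   # 32-bit integer addressing
j = rng.integers(0, 8, n).astype(np.int32)
vals = rng.random(n)

image = project(i, j, vals, 8, 8)
print(np.isclose(image.sum(), vals.sum()))  # → True (mass is conserved)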
>
> 3.  Ray-tracing and post-processing radiative transfer, even optically thin.
>  Right now, field generation can take some time -- but by constructing
> special fields (for instance, X-ray fields), we can move 100% of the
> computation onto the GPU and speed it up substantially, so that again
> the projection is the dominant portion of the computation, rather than
> the interpolation.
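As a sketch of the idea in point 3: the expensive per-cell emissivity evaluation can be replaced by a precomputed lookup table plus a cheap interpolation, which is exactly the data-parallel shape that ports well to a GPU.  The emissivity form below is a placeholder, not a real X-ray cooling function, and the names are illustrative.

```python
import numpy as np

# Tabulate the (expensive) emissivity once over log-temperature; each
# cell then costs only one table interpolation.  The power-law form here
# is a stand-in for a real Lambda(T).
log_T_table = np.linspace(4.0, 8.0, 256)
emissivity_table = 10.0 ** (0.5 * log_T_table - 6.0)

def xray_emissivity(temperature, density):
    # epsilon = n^2 * Lambda(T), with Lambda(T) read from the table.
    log_T = np.log10(temperature)
    eps = np.interp(log_T, log_T_table, emissivity_table)
    return density ** 2 * eps

T = np.array([1e6, 1e7])
rho = np.array([1.0, 2.0])
print(xray_emissivity(T, rho))
```

Each cell is independent, so on the GPU this becomes one thread per cell reading a small shared table -- the projection, not this interpolation, then dominates the runtime.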
>
> I've spoken with Sam Skillman and a couple other people, and this idea
> seems to get them a bit jazzed up, so perhaps it's something worth
> exploring, particularly as Lincoln is coming online (or already is?)
> and can be used as a deployment and runtime platform.
>
> -Matt
>
> On Thu, Jul 16, 2009 at 10:07 AM, Brian O'Shea<bwoshea at gmail.com> wrote:
>> Hi Matt,
>>
>> 1.  <hand up>
>> 2.  <hand up>
>>
>> Additionally, several nodes of Spur (the viz machine at TACC) have
>> Nvidia GPU boards and NCSA's Tesla system has several as well.  Those
>> ought to be able to use PyCUDA.
>>
>> What's on your mind, Matt?
>>
>> --Brian
>>
>> On Thu, Jul 16, 2009 at 9:55 AM, Matthew Turk<matthewturk at gmail.com> wrote:
>>> Hi guys,
>>>
>>> Can I get a show-of-hands --
>>>
>>> 1.  How many of you have access to GPUs that support CUDA?
>>>
>>> 2.  How many of you have installed or CAN install PyCUDA?
>>>
>>> Thanks!
>>>
>>> -Matt
>>> _______________________________________________
>>> Yt-dev mailing list
>>> Yt-dev at lists.spacepope.org
>>> http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
>>>
>>
>
