I wonder whether any effort put into GPU (read: special-purpose) hardware-accelerated desktop UIs is worthwhile, given this kind of speculation about the future of multi-core general-purpose processors: Quake 4: Raytraced. Behind the link, the "Download" link will take you to a video of Intel people predicting that an 8-way CPU two years from now will be fully capable of ray-tracing a modern game in real time. Perhaps this is overly optimistic; however, it is not difficult to imagine that, in the mid-term future, 8- or 16-way general-purpose CPUs will be coming to a laptop near you.

As an aside, my work in Beowulf clustering gives me occasion to pay an elevated level of attention to planned technology changes from Intel and AMD, whose parts we use to build the supercomputers. The word we are hearing from them is almost uniform: they are stuck at the 3.0 GHz barrier, and for the foreseeable future their advancements will take the form of power reductions from shrinking the manufacturing process and from increasing the number of cores on-die.

It appears to me that, at some point in the next 2-4 years, the GPU will be well into the process of being replaced by the grandchild of SSE4 (under some other name, of course), which will contain specialized instructions used to accelerate graphics operations (2D and 3D) on general-purpose processors. AMD and Intel have both announced plans and projects to move their graphics chips on-die. This is still in the "special purpose" vein but, once the competition from third-party expansion cards from the likes of NVidia is crushed, it doesn't seem like much of a leap to speculate that a desire to move away from specialized pipelines for graphics will be afoot.
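To make that concrete, here is a minimal sketch, using the SSE2 intrinsics that already exist today, of the kind of SIMD graphics work I mean: a 2D compositing-style operation (saturating addition of two rows of 8-bit pixels) processed 16 pixels per instruction. A graphics-oriented successor to SSE4 would presumably offer richer built-in versions of operations like this; the function here is just an illustration, not any announced instruction set.

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stddef.h>
    #include <stdint.h>

    /* Saturating add of two rows of 8-bit pixels, 16 at a time.
     * Assumes len is a multiple of 16 and the pointers are
     * 16-byte aligned -- exactly the sort of bookkeeping the
     * programmer must handle by hand on the CPU side today. */
    static void add_pixels_saturated(uint8_t *dst, const uint8_t *a,
                                     const uint8_t *b, size_t len)
    {
        for (size_t i = 0; i < len; i += 16) {
            __m128i va = _mm_load_si128((const __m128i *)(a + i));
            __m128i vb = _mm_load_si128((const __m128i *)(b + i));
            _mm_store_si128((__m128i *)(dst + i),
                            _mm_adds_epu8(va, vb));
        }
    }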

Will OpenGL and DirectX still be relevant then? I wonder whether a software renderer could achieve the eye candy that people want on the desktop. If it can do that for alien invaders in Quake 4 today, why not for GTK?

Comments

(Anonymous)
Oct. 28th, 2007 08:09 am (UTC)
that would be nice, but...
... it's going to suck power pretty hard. Eye candy is rendered more power-efficiently by the GPU than by the CPU, at the moment.
(Anonymous)
Oct. 28th, 2007 09:44 am (UTC)
programming model
Sure, it may be that CPUs will absorb what GPUs can do right now using "specialized instructions" -- but then, using those specialized instructions will probably look pretty similar to what GPU shader programming looks like right now (maybe minus the OpenGL/DirectX management stuff that we use currently to load data). So there is no harm done in investigating that sort of thing.

Why do I think that it will probably look similar? When you compare MMX/SSE and GPU shader programming, it turns out that the former requires you to think far more about how to parallelize instructions, alignment issues, memory access, etc., whereas with the latter you can just provide the code that computes a single pixel and it is automatically parallelized. How do GPUs manage that? Well, the fact that their programming model is restricted means that they don't have to take into account all sorts of edge cases and can do more optimization without explicit support (or brain cycles) from the developer.
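A rough illustration of the contrast, sketched in plain C as a stand-in for a real shading language (the function names are hypothetical): in the shader model the programmer writes only the per-pixel function, and the runtime owns the loop, so it is free to evaluate pixels in any order, in parallel.

    #include <stddef.h>
    #include <stdint.h>

    /* Shader-style: the author supplies only per-pixel logic.
     * Here, a simple diagonal gradient; assumes width + height > 0. */
    static uint8_t shade_pixel(size_t x, size_t y,
                               size_t width, size_t height)
    {
        return (uint8_t)(((x + y) * 255) / (width + height));
    }

    /* The "runtime" owns the iteration. Because shade_pixel has no
     * cross-pixel dependencies, this loop can be parallelized or
     * vectorized automatically; no alignment or scheduling decisions
     * are left to the author, unlike hand-written MMX/SSE code. */
    static void run_shader(uint8_t *fb, size_t width, size_t height)
    {
        for (size_t y = 0; y < height; y++)
            for (size_t x = 0; x < width; x++)
                fb[y * width + x] = shade_pixel(x, y, width, height);
    }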
(Anonymous)
Oct. 28th, 2007 10:37 am (UTC)
OpenGL is an API. In its beginnings, in 1992, implementations were mostly software-only. With time, more and more of the implementation shifted to the hardware. Since then, the API has had to be extended to catch up with the hardware's capabilities, with more and more programmability.

Nowadays the software/hardware border is blurring. GPU hardware is becoming as general-purpose as the CPU, and OpenGL is becoming a JIT compiler.

Even when Tera-Scale-like architectures become mature, I don't think APIs such as OpenGL will become irrelevant. My guess is that they will be as important as, or even more important than, they are today, because they will continue to provide a stable and widely available interface over such a heterogeneous range of graphics-acceleration architectures.
(Anonymous)
Oct. 28th, 2007 06:29 pm (UTC)
re
see: EVAS, EFL.
(Anonymous)
Oct. 28th, 2007 08:14 pm (UTC)
Excellent for Linux!
Excellent! Let's get rid of those complicated 3D beasts with non-free drivers (currently there's no graphics card with competitive performance and working free drivers).
With those, everything will work out of the box without proprietary add-ons.

That rocks!
(Anonymous)
Nov. 11th, 2007 02:25 pm (UTC)
AMD instructions for stream processing
I just came across a paper at PACT '07; see http://portal.acm.org/citation.cfm?id=1299106&jmp=abstract&coll=ACM&dl=ACM&CFID=5912792&CFTOKEN=18817275#abstract

It's about putting stream-processing instructions into CPUs and would seem to be what you are getting at. It also nicely highlights how similar the programming model is to that of GPUs.
