World: r3wp

[!REBOL3]

BrianH
17-Jul-2010
[3773x3]
AGG is not the best place to put OpenCL support. OpenCL is not very 
useful for accelerating graphics, and that is what AGG would need. 
A standalone OpenCL dialect would be more useful. Graphics acceleration 
uses other APIs, like OpenGL or Direct2D. AGG isn't 3D, so using 
OpenGL or Direct3D would be equivalent to reimplementing Direct2D 
on platforms that don't support it.
If the standalone OpenCL dialect were also translatable to CUDA, it 
would be even better :)
Or maybe DirectCompute on Win7/Vista.
shadwolf
17-Jul-2010
[3776]
BrianH, hmm, actually when we display things using AGG only the CPU 
is used, and if we want to do extensive visual-effect computing (zoom, 
spinning things, etc.) ... then the CPU gets heavily loaded ... and since 
REBOL doesn't take advantage of multiple CPUs or CPU-to-GPU communication, 
those heavy computation loops never get accelerated. The problem is that 
nowadays NVIDIA is so expensive nobody buys it; ten ATI cards are sold 
for every NVIDIA card, and in the very near future AMD CPUs will be 
merged with ATI GPUs into a single chip (the Fusion APU). That is the 
tendency of the computing industry, so if you have to support one 
parallel design I would go for the AMD/ATI couple instead of betting 
on a nearly dead horse...
BrianH
17-Jul-2010
[3777]
As for seeing a partial pixel, the writer of AGG demonstrated (in 
an unrelated post) using anti-aliasing to do 1/256 horizontal pixel 
positioning.
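(A rough REBOL sketch of the trick, not the AGG author's actual code: 
an edge at a fractional x position can be faked by splitting the pixel 
coverage, i.e. the anti-aliasing alpha, between the two neighbouring 
pixel columns.

    REBOL [Title: "Sub-pixel positioning sketch"]
    ; Illustration only: split edge coverage between two adjacent
    ; pixel columns so the eye perceives a fractional horizontal offset.
    coverage: func [x [decimal!] /local frac] [
        frac: x // 1                       ; fractional part of the position
        reduce [
            to integer! 255 * (1 - frac)   ; alpha for the left pixel column
            to integer! 255 * frac         ; alpha for the right pixel column
        ]
    ]
    print coverage 10.25    ; left column gets roughly 3/4 of the coverage

With 8-bit alpha that gives 256 distinct horizontal positions per pixel, 
which is where the 1/256 figure comes from.)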
shadwolf
17-Jul-2010
[3778]
But yes, BrianH, you've got the point: when you rely on hardware you 
have to choose which technology you support. I know REBOL's main goal 
is to be abstracted from the hardware / OS / drivers .. but then you 
have a toy language everyone laughs at, one that can't really deliver 
the same thing on every OS and computer apart from some very basic 
features like networking, encryption, etc...
BrianH
17-Jul-2010
[3779x2]
The semantics for a GPGPU dialect in REBOL would likely be pretty 
high-level, and could be translatable to different GPUs by using 
different compiler backends. It's not necessarily a good idea to 
bet it all on one horse when you can support them all just by being 
a bit general. We wouldn't have to do major tweaking for a specific 
GPU architecture, since the level of speedup would be great for even 
a half-assed translation.
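(Purely as a hypothetical illustration of how such a dialect might look, 
no such compiler exists in REBOL today: the kernel would just be a REBOL 
block, and the backend, OpenCL, CUDA or DirectCompute, would be picked 
when it is compiled.

    REBOL [Title: "Hypothetical GPGPU dialect sketch"]
    ; Imaginary dialect: a high-level, element-wise kernel described as data.
    ; A compiler backend would translate it to OpenCL, CUDA, etc.
    vec-add: [
        input  a [decimal!] b [decimal!]   ; two input vectors
        output c [decimal!]                ; one output vector
        map [c: a + b]                     ; element-wise body
    ]
    ; gpu-compile and its /backend refinement are invented for this sketch:
    ; kernel: gpu-compile/backend vec-add 'opencl
    ; result: kernel [1.0 2.0 3.0] [10.0 20.0 30.0]

Because the block only says what to compute, not how, the same source 
could feed several backends without GPU-specific tweaking.)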
It's not a toy language, it's a high-level language. The compiler 
would handle the details. That is like calling C a toy language.
shadwolf
17-Jul-2010
[3781]
BrianH, the problem is that REBOL doesn't tend to rely on one specific 
thing or another; its philosophy is to be an easy way to do easy things 
easily ... when you want to get yourself out of that scope you face 
hella difficulties. Why should I code 99% of a project in C and then 
write a dialect just to do the last 1% as REBOL action code?
BrianH
17-Jul-2010
[3782]
Um, you must not be working on the same things I am. I do tough stuff 
in pure REBOL quite often. The only C I see is there to implement 
low-level dialects used by REBOL, but those aren't as often needed 
as DO or PARSE dialect code.
shadwolf
17-Jul-2010
[3783x2]
That's only useful if you are sure the extension will be extensively 
used. What interests me is doing REBOL and finding ways to bring today's 
possibilities into it .. remember that REBOL was designed around 1998; 
at that time processors were single-core and GPUs were a joke (a 100 MHz 
GPU with 64 MB of video RAM, a 433 MHz CPU with 133 MHz SDRAM).
We are more than ten years past that design ... can REBOL keep saying 
"OK, the hardware evolved but I refuse to use it"?
BrianH
17-Jul-2010
[3785]
One of the things that the modern multi-core language research has 
discovered is that shared-memory multithreading is often a bad idea, 
and that multiprocessing with asynchronous IPC is more reliable and 
scales better. And coincidentally enough, multiprocessing is the method 
REBOL uses. Now all we have to do is get the processes smaller and 
the IPC (/Services) more efficient.
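(A minimal R2-style sketch of that shape, plain TCP between two REBOL 
processes rather than the /Services API itself: the parent launches a 
worker and waits for a message instead of sharing any memory with it.

    REBOL [Title: "IPC sketch - parent"]
    ; Illustration only: message passing between processes, no shared state.
    listen: open tcp://:9090        ; listen for the worker's result
    launch %worker.r                ; start the worker as a separate process
    conn: first wait listen         ; accept the worker's connection
    print ["worker sent:" copy conn]
    close conn
    close listen

    REBOL [Title: "IPC sketch - worker, saved as %worker.r"]
    result: 40 + 2                  ; some independent computation
    port: open tcp://localhost:9090
    insert port mold result         ; send the result back as text
    close port
    quit

The port number 9090 and the file name %worker.r are arbitrary; the 
point is that the only thing the two processes share is the message.)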
shadwolf
17-Jul-2010
[3786x2]
I think today's hardware capabilities are introducing a lot of problems 
into software parallelisation strategies (which has always been the 
case); that's a field I think REBOL should explore, bringing its own 
originality to solving that growing difficulty.
Hmm, yeah, but that solution looks like a joke to the rest of the world 
.. face it ... there are fewer than a thousand people who really care 
about REBOL's future ...
BrianH
17-Jul-2010
[3788]
The strength that REBOL has is that it is relatively easy to create 
a dialect with different semantics, because we have so many good 
tools to help with the implementation, more all the time. So REBOL 
becomes a good platform on which to do those experiments. And we 
always have the old-school single-process DO dialect to fall back 
on.
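(For instance, a throwaway dialect with its own semantics is nothing 
more than a block plus a PARSE rule; a toy sketch:

    REBOL [Title: "Tiny dialect sketch"]
    ; A two-word accumulator dialect, interpreted with PARSE.
    run: func [spec [block!] /local total n] [
        total: 0
        parse spec [
            some [
                'add set n integer! (total: total + n)
              | 'reset (total: 0)
            ]
        ]
        total
    ]
    print run [add 3 add 4 reset add 10 add 5]    ; prints 15

Swapping in different parens, or compiling the block instead of 
interpreting it, is how the same trick could scale up to something like 
a GPGPU or multiprocessing dialect.)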
shadwolf
17-Jul-2010
[3789]
Try talking about asynchronous processing with a guy doing Java thread 
programming all day long; that's interesting ...
Steeve
17-Jul-2010
[3790]
Yeah, we could probably boost all the View stuff by isolating the 
rendering engine in a distinct process.
shadwolf
17-Jul-2010
[3791]
Asynchronous processing has a weak point: the data flow being processed 
must not be too large .. so for example, if you want to bring Cheyenne 
to its knees, make it relay a web-radio stream.
BrianH
17-Jul-2010
[3792]
Threads were considered to be the solution last decade. And that is 
why we have a multi-core crisis now, because threads are not a good 
solution. That is why the main research now is in active objects 
and green processes.
shadwolf
17-Jul-2010
[3795x2]
BrianH, OK, but who promoted that way of thinking? The multi-core 
crisis is mainly due to shared memory and to the weak memory controller, 
completely saturated with data flowing from the CPU and from the GPU.
That's why Intel, NVIDIA, Apple (and, to a lesser extent, all the 
smartphone makers) and AMD/ATI are doing, or announcing, the merger of 
the memory controller, the CPU and the GPU into a single unit.
BrianH
17-Jul-2010
[3797x2]
Um, that is not the multi-core crisis. The real crisis is that it 
is very difficult to break a program into threads, and even more 
difficult to manage shared state. This is why there are so many issues 
with locking and such.
It's a programming problem, not a hardware problem.
shadwolf
17-Jul-2010
[3799x2]
The A4 chip in the iPad, for example, is already an all-in-one chip, 
and the performance is better not just because the software is better 
but because the hardware is specifically designed to fit it.
That's something only a closed, 100%-controlled design like Apple's 
can offer.
BrianH
17-Jul-2010
[3801]
Chips in handheld and embedded systems aren't that multicore yet, 
so they can still be programmed in the old ways (like Java).
shadwolf
17-Jul-2010
[3802x3]
BrianH, that's why before multi-core processors you had multiple 
single cores with dedicated-memory architectures, and you still have 
that design in the MEGA ULTRA COMPUTERS.
And yes, writing programs for those computers requires specific 
knowledge .... The problem is that the industry told the coders: 
continue to code the way you did so far, the hardware will optimise it.
Which obviously isn't the case.
BrianH
17-Jul-2010
[3805x3]
Yes, but those mega-ultra computers are just a sign of where things 
are going. On the bitty computers you can still party like it's 1979, 
but on servers you are starting to see cores in the hundreds.
And only on the many-core computers is multitasking a real problem 
that needs new language semantics. On the old or bitty systems 
REBOL-as-it-is will do fine.
(I agree, sometimes you have to use the aA button to increase the 
font size.)
shadwolf
17-Jul-2010
[3808x4]
Well, the problem is that when you have several chips you have to 
design a lot of buses, which greatly increases the price of the 
computer; imagine those computers with over a hundred or a thousand 
processors.
Each individual processor is weak, but all together, with well-coded 
software, they are blazing.
Anyway, you won't play Halo 4 on them, so whatever; what people buy 
today is games.
The game industry is a 90-billion-dollar market ... if REBOL can be 
used to solve most of the coding problems there, I would say why not?
BrianH
17-Jul-2010
[3812]
I don't design hardware, I design software, or tools to build software. 
And different hardware sometimes demands different semantic constraints 
on the tools to build the software. The multi-core crisis isn't so 
much a hardware crisis as a crisis of the development tools that have 
to produce software for that hardware.
shadwolf
17-Jul-2010
[3813x2]
BrianH, that's why we need REBOL there.
But REBOL using 100% of my CPU to draw 3 lines on screen? I say NO! 
You see my point.
BrianH
17-Jul-2010
[3815]
I like REBOL because it makes it easy to write development tools. 
And that will inevitably include tools for massive multitasking.
shadwolf
17-Jul-2010
[3816]
OK, the software can be optimised, and the differences in software 
design and rendering potential between R3 and R2 have already shown 
a big improvement.
BrianH
17-Jul-2010
[3817]
Tools for graphics too, which others are actively working on now. 
And once AGG is reliably in the host, then the whole (qualified 
portion of the) community can work on optimizing it.
shadwolf
17-Jul-2010
[3818]
And that's not even using the grace of my new hardware: if I run that 
REBOL script on my desktop computer or on my netbook the results will 
be the same, even though my desktop is hella Goliath and my netbook is 
hella tiny ... So people will say "hey, that's fantastic, the same 
animation runs anywhere with the same results (more or less)", but I 
would say ... hmm, no.
BrianH
17-Jul-2010
[3819x2]
But GPGPU tools are a separate issue, really, even if they run on 
the same hardware. The workloads are separate and have different 
semantics.
shadwolf
17-Jul-2010
[3821x2]
Every time Carl shares a benchmark with us on AltME, I come up with a 
result at least 70% below that benchmark ... for non-graphical 
computing; so this difference does exist in REBOL.
But when you enter the graphics area, it's like a normalised dimension 
where any hardware produces the same rendering.