AltME groups: search


results summary

world     hits
r4wp      90
r3wp      879
total     969

results window for this page: [start: 501 end: 600]

world-name: r3wp

Group: All ... except covered in other channels [web-public]
Pekr:
6-Feb-2008
OK, but do you think that it is a bad thing to have vector gfx available? 
That way, VID3 UI will be scalable. Carl also said that there is 
some space for improvement. What Cyphre thinks is that maybe we 
could get a compound rasterizer working, which could speed things up 
a bit (Flash uses it).
Reichart:
6-Feb-2008
Both XML and REBOL already work on multiple systems, I don't perceive 
much of a speed increase, and nowadays Zip works great. We did 
not have that as a standard 20 years ago.
Maxim:
9-Jan-2009
for me the key point lies not in the fact that we can already make a 
mezz func which simulates the foreign! handling with a function such 
as ASSIMILATE.

the difference lies in the fact that if the native function can load 
invalid data, then it should, simply because of the fantastic 
speed at which it's able to convert string data into rebol literal values of 
a variety of types.

hooking up interpreter-driven code within the handling of that will 
slow it down.  using parse with the next refinement works... but 
it's nowhere near as fast for loading, say, 300MB of scientific data, 
which is an actual case I had to deal with.  just doing a replace/all 
on that file took 30 minutes.  assimilate with foreign! handling 
would have taken about 5-10 seconds, and the code would have also 
been easier to write overall.
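As a hedged sketch of the interpreter-driven alternative being compared against here (load/next is real R2; the function name and the skip-on-error recovery strategy are purely illustrative):

	load-tolerant: func [text [string!] /local out val] [
		out: copy []
		while [not tail? text] [
			either error? try [set [val text] load/next text] [
				text: next text          ; crude recovery: step past one character and retry
			] [
				append/only out :val     ; keep the successfully loaded value
			]
		]
		out
	]

Every failed token costs a TRY and an error object, which is the interpreter-driven overhead Maxim says makes this far slower than a native that tolerates foreign data directly.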
BrianH:
9-Jan-2009
Sorry if I misrepresented your position Brian, I was trying to be 
humorous. I've just put in a request to tweak TRANSCODE's handling 
of commas. If that request goes through, I will be able to add the 
/else option to LOAD. With a few more tweaks I can increase the speed, 
make more of the process native, etc. This is not preventing REBOL 
from dying - it is just allowing you to provide a defibrillator :)
Maxim:
30-Mar-2009
it's also a question of speed.  load is blindingly fast.
Steeve:
1-Apr-2009
it's easy to simulate with transcode and parse but we need speed 
here
Oldes:
3-Apr-2009
It would be fine to have AsmJIT integrated in REBOL as it could be 
used to speed up blitting as well.
TomBon:
9-May-2009
paul, what robert means is that a patent is worthless until you have 
enough power to defend it. Unfortunately I have the same experience: 
too expensive, and if a real big player likes to take it, they will 
get it even if it takes years, your money and nerves. there are enough 
simple and dirty tricks to dry you out. only speed helps here in my
opinion.
Maxim:
18-May-2009
what was a bit unsettling was the speed at which their server called 
me when I hit call now... I think the phone rang before the page 
refresh occurred!
Group: Core ... Discuss core issues [web-public]
Gabriele:
1-Nov-2006
on a small block the speed is almost the same (mine a little bit 
faster)
Maxim:
9-Nov-2006
and if he wants, he can just re-implement them natively, if the speed 
gains are worth it.
Maxim:
9-Nov-2006
the speed is secondary at this stage... what we really need is to 
lay out the logic and demonstrate a consistent path to conversion 
for all datatypes.  two or three guiding principles will emerge out 
of the implementation... no need to try and define them too early 
on IMO.
Geomol:
23-Nov-2006
Tough questions! :-) In general I'm for evaluation of what's possible 
at the earliest time possible. This way stack space should be 
kept at a minimum. This works against the post-check method mentioned 
earlier. So we have the old fight between speed and size. We want 
both! (Good speed and minimal size = reduced bloat and a small footprint.) 
Examples are good to explore this!

i: 2
a/(i: i + 1): (i: i * 2 0)


Should evaluate the first paren right away, because it's possible. 
This way the path is reduced to: a/3

Now, a/3: might not make sense yet, depending on whether a is defined 
or not. But we don't care, because it's a set-word, and here post-check 
rules.

a: [1 2]
a/(a: [3 4] 1)

should give 3 (I think).

a: 1x2
a/(a: [3 4] 1)

should also give 3 then.

The last one:
a: 1x2 a/(a: 3x4 1): (a: 5x6 7)
should go like this:
i) a is set to 1x2
ii) a is set to 3x4
iii) first paren returns 1, so a/1 is about to be set (a is not decided 
yet, because it's a set-word).
iv) a is set to 5x6 and second paren returns 7.
v) a is about to be set, so it's being looked up (a holds 5x6 at this 
point).
so the result is that a ends up holding 7x6.
Geomol:
25-Nov-2006
Ladislav, if a: 1x2 a/(a: 3x4 1): (a: 5x6 7) resulting in 7x2 is 
more than 2 times faster than the post-check method resulting in 
7x6, then that is a very good argument to have it that way. Anyway, 
your example is something that should be avoided (if you're a good 
developer, I guess), so speed is better than a "more natural" result. 
That's my view.
Maxim:
14-Dec-2006
this can limit your memory use a lot! especially if you do a pre-analysis 
and count the recurrence of each character in the article 
(so you can keep only those that recur, saving some speed).
Anton:
28-Dec-2006
I want to speed up access to a block of objects (unassociated) for 
a search algorithm.
Should I use LIST! or HASH! ?

It's a growing list of visited objects, and I'm searching it each 
time to see if the currently visited object has already been visited.
Gregg:
29-Dec-2006
If lookup speed is primary, use hash!; if you're doing a lot of inserts 
and removes, use list!. There is overhead to hash items for lookup, 
but the speedup is enormous on lookups.
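A minimal sketch of the hash! approach Gregg describes, assuming a visited-items scan like Anton's ('data' and 'process' are illustrative placeholders):

	visited: make hash! []            ; hash! makes FIND roughly constant-time
	foreach item data [
		if not find visited item [    ; have we seen this one before?
			append visited item
			process item              ; placeholder for whatever work is done per new item
		]
	]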
Volker:
1-Jan-2007
parse series [any [record-size skip p: (p: insert p new-value) :p]]
but that shifts a lot.
would use
	insert clear series result
and for speed there is insert/part to avoid all the small temp blocks.
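A minimal sketch of the insert/part point, assuming a fixed record-size slice is copied from a source series into an output block (the names are illustrative):

	; with a temporary block created by copy/part:
	insert tail out copy/part src record-size
	; with insert/part the slice is inserted directly, no temporary series:
	insert/part tail out src record-size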
Pekr:
15-Jan-2007
Gabriele - I hope so - otoh - what is the expense of threads? I looked 
into my friend's code programming a server in Delphi - they used exactly 
that - simply a new thread for each connection. I would like to know 
what the overhead is - speed, memory etc. I called it "a cheap trick", 
as my friend could not even imagine how he would do a multiplexed 
scenario ...
Oldes:
18-Apr-2007
I chose Anton's version, which is as fast as the load version (but 
I somehow don't know if it's good just to convert to string and load).. 
and I don't need recursive (which is fine, but slower).. and maxim... 
yes.. just speed (it's a shortcut for forskip anyway)...  on the 
'while version it's good that the 'change automatically skips 
to the position after the change.
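A hedged sketch of that while/change pattern (the data is illustrative): CHANGE returns the position just past the changed part, so the loop naturally resumes after each replacement.

	s: "a*b*c"
	pos: s
	while [pos: find pos #"*"] [pos: change pos #"-"]
	s    ; == "a-b-c"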
Geomol:
18-Apr-2007
I never measured parse speed before, I just did. It can parse this 
19420 byte text:
http://www.fys.ku.dk/~niclasen/nicomdoc/math.txt
into this 56131 byte html:
http://www.fys.ku.dk/~niclasen/nicomdoc/math.html

in 0.3 seconds on my 1.2 GHz G4 Mac. The parse rules are around 1100 
lines of code. Parse is cool! :-) Good job, Carl!
Anton:
18-Apr-2007
Ladislav, maybe Oldes meant "slower to write"? I guess I was aiming 
for less typing too. Parse always seems to win in speed of operation. 
Yes, CHANGE becomes terribly bad for large blocks. I must think 
of a better suggestion.
Anton:
18-Apr-2007
And their speed:
Henrik:
19-Apr-2007
terry, about accessing data, there is a speed issue with using 
indexes in paths, so data/2 is much slower than second 
data or pick data 2. that should solve at least that aspect.
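For illustration, the three equivalent forms being compared (assuming data is any block):

	data: [10 20 30]
	data/2         ; integer index through a path - the slow form in R2
	second data    ; ordinal accessor
	pick data 2    ; PICK with an index

All three return 20; only the path form pays the extra path-evaluation cost.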
BrianH:
7-May-2007
As long as you don't elaborate on the differences between natives, 
actions and ops on every reference to functions of any of those types, 
then it is cool to use the term "native" to refer to all of those. 
The only things that matter about the difference between native and 
REBOL functions are the speed of natives and the available source 
code of REBOL functions.
btiffin:
12-May-2007
During a Sunanda ML contest, someone came up with a parse solution 
to compacting
...it was fairly blazing in speed, but I got busy after phase one 
of the contest and didn't follow as closely as I would have liked to.

Check http://www.rebol.org/cgi-bin/cgiwrap/rebol/ml-display-thread.r?m=rmlPCJC
for some details...watch the entries by Romano, Peter and Christian.
The winning code ended up in

http://www.rebol.org/cgi-bin/cgiwrap/rebol/view-script.r?script=rse-ids.r

It's not the same problem set, but parse seems to be a bit of a magic 
bullet, speed-wise.
Ladislav:
13-May-2007
(and I have got a benchmark there measuring overall speed using a 
couple of algorithms - they are derived from a Byte benchmark published 
in June 1988) - we are quite a bit faster than the C language was 
on an average computer back then :-p)
Pekr:
13-May-2007
Ladislav - so we exchanged the dynamic nature of the language and sacrificed 
speed, taking a step 20 years back :-)
Geomol:
17-May-2007
That's not actually what I'm after. In this project I need all the 
speed I can get, so I do it in C, but it's a lot of time spent. 
I was thinking about a dialect in REBOL that can be converted to 
C and compiled. That way it should be possible to produce C source 
a lot faster than I do now.
Sunanda:
18-May-2007
Probably 90% of all rebol code is compilable. 

There may be some speed improvements if code was identified as such, 
eg....
        a: func/compilable [a b] [return add a b]

....could (in effect) inline the (current) assembler code for 'add 
and 'return....So if they change value, this code continues unchanged.

But what would we have saved? One level of lookup. Is it worth it?
Henrik:
23-May-2007
and with greater speed but less accuracy:
>> load form [1 2 [3 4] 5 6]
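For reference, that console line yields the flattened block; the lost nesting is the accuracy trade-off mentioned:

	== [1 2 3 4 5 6]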
Dockimbel:
17-Jul-2007
From a fresh REBOL/View 1.3.2.3.1 :

>> system/stats
== 4236013
>> system/stats/recycle
== [1 2054 2054 183 183 1869648]
>> make string! 1'860'000
== ""
>> system/stats
== 6099084
>> system/stats/recycle
== [1 2054 2054 183 183 7088]
>> make string! 10'000
== ""
>> system/stats
== 3210049
>> system/stats/recycle
== [2 6385 4331 543 360 2999280]

Just guessing: 


REBOL triggers a GC when the "ballast" value (the last one in the 
block) reaches 0. Then the GC frees only the values that aren't referenced 
anymore. So GCs are predictable if you know exactly how much memory 
is consumed by each evaluated expression. Remember that it is very easy 
in REBOL to keep hidden references to values (like functions' persistent 
contexts)...

So that way, it keeps a fast average time for new allocations. (I 
guess also that series! and scalar values are managed with different 
rules).

The above example also shows that REBOL gives memory back to the 
OS, but the conditions for that to happen are not very clear to me. 
The GC probably uses complex strategies. If the GC's internal rules 
were exposed, we could optimize the memory usage of our applications, 
and probably the speed too.
james_nak:
15-Aug-2007
Actually I was just wanting to know if I was missing something in 
the way I am checking for a value within an object that is part of 
a block of objects. Nothing really sophisticated, and these blocks 
are really small, so no need for speed increases. 

To be frank, I often look at the code you all write and say to myself: 
"Self, how in the world did they think of that?" or "Oh, I didn't 
know you could do that." For example, when I first started using Rebol, 
I didn't know about the "in" word, as in "get in object 'word", 
so I was always using paths and trying to figure out how one would 
make the path a "variable" (object/mypath, where mypath could be 
some changing value). 
Thanks for your input though.
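For illustration, the two access styles being contrasted here (the object and field names are placeholders):

	obj: make object! [size: 10]
	field: 'size
	get in obj field    ; == 10, field chosen at run time
	obj/size            ; == 10, field fixed in the path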
Anton:
1-Apr-2008
Which is great for high-speed prototyping.
Fork:
1-Apr-2008
Yes, high-speed prototyping does seem to be REBOL's area.  I've tried 
Awk and PHP and Perl and such and thought they were all terrible.
Anton:
1-Apr-2008
Fork,


DOES, HAS, FUNC: "converting the container". This must be a first 
impression only, since each of those creates a function!

(See  ?? has  ?? does  ?? func)    This is a nice gradation of function 
specifications which saves keystrokes (and thus overall script size 
and complexity), by gradually increasing the options. So you only 
ever specify what you need to specify.


Using shorter words will not speed up rebol much, because they are 
converted at load time to symbols (which is just an integer, internally). 
However, the general aim is to shrink the program in order to make 
it more easily understandable. In mathematics, concepts have been 
reduced to single-letter symbols, and can thus be more easily manipulated 
in a single page. (Of course, I don't recommend using single-letter 
symbols in rebol, most of the time.)


You can share functions in an object by putting them in a "sub-object", 
eg.
	enum!: context [
		v: none    ; current value	
		access: context [
			get-enum: func [enum] ...
			set-enum: func [enum new-value] ...
		]
	]

Then:
	my-enum/access/set-enum my-enum 'new-value


The "sub-object" (access) will be shared amongst all enum! instances 
(unless you explicitly clone it when you make your enum! instances). 
(Cloning is done just using MAKE.)

if find [false true] 'false ...   

You will almost never have to do this (using true and false, that 
is). It usually boils down much simpler, you will happily discover.
Graham:
28-Jun-2008
Should one go for speed or readability?
Graham:
28-Jun-2008
If it's a time critical app like a web server I'd go for speed ...
BrianH:
18-Dec-2008
In some cases, yes. In other cases, changing the functions to mezzanines 
has allowed us to get algorithmic speedups. In more critical cases 
(like loops), functions that used to be mezzanine are now native. 
The overall speed of R3 is greater.
[unknown: 5]:
18-Dec-2008
Steeve, I'm finding that copy/part using 'at on /seek is working 
faster for me.
[unknown: 5]:
18-Dec-2008
it's the speed of the read which will have the most impact.
BrianH:
25-Jan-2009
Speed is a bigger concern for mezzanines, but there has to be a balance. 
We are doing more advanced tricks in R3 to increase speed and reduce 
memory overhead, but R2 is in bugfix and backport only mode right 
now.
BrianH:
13-Feb-2009
If you want it to speed up, use until.
Steeve:
9-Mar-2009
tried to make nforeach for R3: 
- missing do/next to evaluate functions in the data block
- probably speed optimizations can be made (not probably, certainly)

nforeach: func [
	data [block!] body [block!]
	/local vars
][
	vars: make block! (length? data) / 2
	data: copy data
	forskip data 2 [
		append vars data/1                        ;* extract vars
		change/only data to block! data/1         ;* convert vars to block (if needed)
		if word? data/2 [poke data 2 get data/2]  ;* get series from word (do/next more relevant if available)
	]
	vars: bind? first use vars reduce [vars]      ;* create a context with vars
	bind head data vars
	while [
		also not tail? second data: head data
		forskip data 2 [
			poke data 2 skip set data/1 data/2 length? data/1
		]
	] bind/copy body vars
]
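A hedged usage sketch, assuming the data block pairs each variable (or block of variables) with its series:

	nforeach [x [1 2 3] y [10 20 30]] [print [x y]]
	; steps both series in parallel, printing:
	; 1 10
	; 2 20
	; 3 30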
BrianH:
24-Mar-2009
Based on its speed characteristics, I would say that the length is 
tracked in the string and just accessed by LENGTH?.
Steeve:
24-Mar-2009
speed vs memory
Oldes:
24-Mar-2009
As I'm reviewing my old code, where I see a lot of rejoins where 
the first arg of block is always binary... what do you think about 
something like:
abin: func[block][append copy first block next block]
where the speed gain is:
>> tm 1000000 [abin [#{00} "a"]]
0:00:01.609
>> tm 1000000 [rejoin [#{00} "a"]]
0:00:02.938
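tm is not a standard REBOL word; a minimal timing helper along these lines is presumably what is meant (a sketch, not necessarily Oldes' actual helper):

	tm: func [n [integer!] code [block!] /local t] [
		t: now/precise
		loop n code
		difference now/precise t    ; elapsed time as a time! value
	]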
Oldes:
31-Mar-2009
I'm not using it.. I'm not coding in R3 yet. Also when I test 'also 
now, it looks like there is no speed gain.
Anton:
31-Mar-2009
Yes, in R2, functions don't bother to unset their local words on 
function exit. This was done for speed reasons. Most functions don't 
need to unset their locals.
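A small sketch of the observable effect in R2, under the assumption that the function's context persists between calls (the names are illustrative):

	f: func [/local x] [x: 42 'x]   ; return the word bound to the local
	w: f
	get w                           ; == 42 - the local still holds its last value after exit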
PeterWood:
10-Apr-2009
Romano once wrote "if you need speed, parse is your friend."

Smart fellow, Romano, much smarter than me. So I'd never question 
parse's speed.
Group: View ... discuss view related issues [web-public]
[unknown: 10]:
26-Nov-2006
mmmm.... yes, I thought by switching from VID to my own faces I could 
reduce memory, but that's not true, I discovered. I'm currently in the 
range of using 17 MB of memory, that's very close to Java memory consumption... 
Why must a GUI eat this amount of memory? .. As 'view communicates 
on top of the OS layer I would not expect it to eat this much.. The 
executable might be 850 K but executing explodes it... even 'altme 
eats "more" than Firefox ... OK, 'view is very flexible..very flexible 
compared to other GUIs..still, with all the nested pointers etc., must 
it eat this much?...mmmmm..... That brings me actually to some other 
factual questions... What commands/functions must people NOT use when 
using faces? I mean regarding speed.. I discovered that 'draw functions 
slow down view, and also a 'foreach loop gets stuck walking through 80 faces... 
Would be nice if Carl could open up a little bit more on 'tuning 
faces...;-)
Maxim:
26-Nov-2006
the newer engine will use only AGG, which will compute more stuff on the 
fly... and thus would be more memory efficient... but at what cost 
in speed.  I hope that speed will not suffer when compared to an 
app which does not use AGG.
Henrik:
6-Dec-2006
cyphre, parsing is native for speed, right?
Maxim:
21-Dec-2006
that depends how much ram he is using... at some point the GC slows 
down soooo much that speed exponentially decreases.
Maxim:
22-Dec-2006
if pieces of the glyphs are the same, then why not just assemble 
them within several draw shapes? and then just draw more than one 
single shape?  the memory tradeoff would be significant, yet the 
speed would likely be almost identical
Jerry:
22-Dec-2006
And even better, I have collected the Chung-Jay code for every Chinese 
character. Chung-Jay can tell us what parts are in a character. I 
plan to use both Chung-Jay code and the pixel-matching method to 
speed up the analysis process.
Anton:
3-Feb-2007
As far as I see it, RT uses objects when speed is needed. They can 
map down to C structures better I think.
Maxim:
4-Feb-2007
again, the speed is no concern as one view refresh is 10000 times 
slower than anything I have ever thrown in the event loop itself.
Janeks:
27-Jun-2008
I needed it for very simple tasks, so it does what I need. But I 
have not yet tested, for example, speed.
Henrik:
13-May-2009
yes, even this fix is rather expensive. one face for each side instead 
of one, but maybe it will speed up redrawing big faces a bit.
Maxim:
16-May-2009
for now... I need something that just "works".  speed is secondary... 
the actual file copy is quick, it's the login at each command which 
takes about a second ...
Group: DevCon2005 ... DevCon 2005 [web-public]
Volker:
30-Sep-2005
maybe speed-demo of rebcode?
Pekr:
3-Oct-2005
so basically your cell phone in a PCMCIA slot, using a packet-based connection. 
All three mobile operators provide it here. There is now even a newer 
standard, called EDGE, which runs on top of that and allows speeds up to 
150 kbit, which is nice for what is basically a cell-phone connection ...
Pekr:
3-Oct-2005
who did the demo of rebcode? Any actual examples of where the speed-up 
could be applied?
Maarten:
3-Oct-2005
No, Coop is a  way for RT to speed up rebol development with the 
community.
PhilB:
3-Oct-2005
Well, it may not be complete .... or be incompatible in some way 
with existing code .... who knows.

For a speed increase of 30 times, one would have thought it would 
have generated more excitement in the community .....
Pekr:
3-Oct-2005
Romano - you talk about it in a strange mood, don't you? Wouldn't you 
guys expect rebcode to speed up rebol even further?
Pekr:
3-Oct-2005
not sure - the speed of rebol is sometimes really limiting, let's 
just admit it ... simply - try e.g. here to scroll with altme - really 
slow ...
Gabriele:
4-Oct-2005
but... my server has 100Mbps so as soon as i'm finished uploading 
your speed will improve i guess :)
BrianH:
4-Oct-2005
ABC doesn't have spyware. Any torrent client will do, as the speed 
mostly depends on the seeders - the client overhead is negligible. 
I used Azureus, worked fine.
yeksoon:
5-Oct-2005
with regards to QTask speed.., from my experience..it really depends 
on the time of day and who logs on.

it gets slow once the QTask developers start working...but apart 
from that, it can be 'breezy'.

Reichart mentions that there are plans to speed things up. Specific 
details, I will leave it up to him to share.
Group: Tech News ... Interesting technology [web-public]
Reichart:
16-Jun-2009
Of note, about 20 years ago I wrote up a paper to build a camera 
with a 100x100 CCD that could capture huge images by vibrating the 
aperture (which would be smaller than a standard pinhole).  The speed 
of your CPU would control the time it took, thus faster computers 
= higher ISO values, that simple.


You would also be able to point it at something far away, and tell 
it to focus on that region, thus getting a clear image even at a 
very far distance.


This is still worth building today.  A $10 camera that takes 10Kx10K 
image in about 1 second, not bad.  Through software you could remove 
things that moved as well, for example cars that park over night, 
people walking around, etc.  Over several days you would end up with 
a crystal clear image of anything that was not moving.
Sunanda:
2-Sep-2009
Interesting idea -- hope it succeeds, and the price drops by an 
order of magnitude!

Max speed is stated as 20 KPH -- not a very high speed for a bicycle. 
So hard braking is unlikely to be a problem.

Needing to use a backpack (no attachable panniers) will be a drawback 
for commuters / shoppers.
Henrik:
12-Nov-2009
http://blog.chromium.org/2009/11/2x-faster-web.html


Google experiments with a new protocol to speed up webserver transfers 
by about 2x over HTTP.
Geomol:
20-Nov-2009
1. "You know that our resources are scarce. There are very few REBOL 
experts and they are all working."


If an expert can't help by delivering C code, which is needed, I 
guess, then it's better if that expert uses his code elsewhere. (See 
e.g. Gabriele's last post in "!REBOL3".)


2. "You know that R3's source model will deliver the much needed 
flexibility in extensions, hosts and open source code."


We still wait to see these things. Do you expect people to wait forever? 
I can understand that many use their REBOL knowledge and try to create 
something similar themselves, because they're tired of waiting. If 
there were alternatives, people wouldn't have to wait, but could move 
back and forth between languages. That's happening with many other 
languages.


3. "You know that R3 development is moving forward at a steady pace."


And it can continue to do that, even if there were competition. Actually 
competition might speed some things up.


4. "You know there is a clause to put R3 in other people's hands, 
if RT bows under."

No, I didn't know that.


5. "You know that the R3 design proces relies heavily on one single 
reference."


Yes, and that puts REBOL developers in what situation? With alternatives 
and competition, how would the situation look? I don't think the 
situation with alternatives needs to be worse than the present one.


6. "You know that RT can't work any one bit faster if a different 
developer with similar goals comes in to compete."

No, I didn't know that. Also if the alternative were open source?


7. "You know that dividing REBOL in separate implementations will 
kill one of its main advantages"


So there can be only one? We have R1, R2 and possibly R3 in the future. 
R3 seems to be not very backward compatible, when it comes out. What 
if there came an alternative that was more compatible with R2 than 
R3 will be? That can't be bad for all our present code written 
in R2.


I'm sorry if I offended you, I didn't mean to. I like change. And 
I like good design.
Ashley:
4-Dec-2009
Google Public DNS: http://code.google.com/speed/public-dns/
AdrianS:
21-Jan-2010
Graham - use the Nightly Tester Tools extension to override the version 
check on those extensions - I've been using the 3.6 nightlies for 
months and I've never really had any issues with overriding some 
of my extensions (I have a ton). The speed boost in 3.6 is worth 
it.
Sunanda:
29-Mar-2010
NoSQL is for people who need speed rather than ACID.


I worked on stuff in the 1980s and 1990s that sacrificed guarantees 
of data consistency for higher rates of throughput. At the time, 
it was the only way to build those systems as we did not have the 
raw machine power needed to run large-scale real time systems.


These days, the same may still be true for some huge write-heavy 
applications. If so, people will do what they can to get the performance. 

But most applications need data consistency more than raw performance.
Graham:
30-Apr-2010
The view is that HP has purchased Palm to get access to WebOS so 
that they didn't have to develop their own.  Their own slate compared 
with the iPad is a dog in terms of battery life, speed and screen 
resolution, and they had long given up on ARM devices.
Maxim:
19-May-2010
thumbs don't have anywhere near the same mobility and speed as the other 
fingers, unless you only use rotation of the first knuckle. 

the moment you have to flex the thumb, it becomes slow.   which is 
why we'll naturally hold the phone laterally and browse using thumbs 
sideways... but doing so vertically isn't nearly as ergonomic.
NickA:
12-Jul-2010
I've been waiting to see this trend pick up speed, for years now 
(looks a lot like Scratch): http://www.nytimes.com/2010/07/12/technology/12google.html?scp=1&sq=googleapp inventor&st=cse
AdrianS:
9-Dec-2010
Actually, the VLC player (free) lets you do that, but you have to 
provide it the link to the stream, whereas with MySpeed, embedded 
videos play at a speed controlled by a little tool tray UI
shadwolf:
14-Jan-2011
maybe using wayland instead of x11R6 + compiz or metacity can speed 
up your boot sequence too
GrahamC:
11-Feb-2011

There are other mobile ecosystems. We will disrupt them. There will 
be challenges. We will overcome them. Success requires speed. We 
will be swift. Together, we see the opportunity, and we have the 
will, the resources and the drive to succeed.
GrahamC:
23-Sep-2011
Experimental results have demonstrated that effects due to entanglement 
travel at least thousands of times faster than the speed of light
Ladislav:
23-Sep-2011
travel at least thousands of times faster than the speed of light
 - except for the fact, that they actually don't "travel"
GrahamC:
15-Oct-2011
http://arxiv.org/abs/1110.2685


neutrinos were not travelling faster than light speed ... the experiment 
did not account for the GPS satellites being in a different reference 
frame.  They recalculated to account for this and found the missing 
32 nanoseconds
Pekr:
10-Jan-2012
Why are you comparing 366MHz machine speed towards the 800MHz one?
Group: !REBOL3-OLD1 ... [web-public]
Oldes:
24-May-2007
Geomol, I have a few quite complex scripts (~300kB) and I'm looking 
forward to rewriting them for R3. If it will speed up, why not, it 
will not be difficult imho. And if you don't want to use the new features 
of R3 you can still use R2, as Brian mentioned.
Gabriele:
29-May-2007
this is done for speed and saving memory
ICarii:
1-Jun-2007
nice - any notes on its execution speed vs other richtext controls 
eg MSRichTextBox ?
Gabriele:
4-Jun-2007
it's not like jpeg decoding that has to be done in c for speed.
Pekr:
19-Jul-2007
Henrik - is there some nice demo already? I would e.g. like 
to see particles running on R3. Should not be difficult to port, 
and could show the general speed improvement :-) Maybe you could take 
short videos of the R2 and R3 versions to compare :-)
Pekr:
30-Jul-2007
yes, I just meant that posting the link to docs could speed up the after-release 
phase, as ppl would at least theoretically know what is coming and 
what the particular design meaning of new things is. IMO posting docs one week before the release 
could be a good thing to consider ....
Pekr:
31-Jul-2007
Cyphre - will there be access to the buffer? Or is OGL the thing 
which will speed up blitting, because it uses HW to draw?
Henrik:
24-Aug-2007
Latest report: Nothing big has happened in a couple of days. Carl 
is buried in some work and bugfixing. I'm building the new requester 
system with the new way to parse dialects. 267 bug reports listed. 
Cyphre has talked about speed optimizations that will be made to 
the graphics system. Pekr is talking. A lot. :-) Gabriele is also 
busy coding.


There are many requests on ports for OSX and Linux as this Windows-only 
thing is getting rather old. Geomol has shown interest in the OSX 
port. Brian Tiffin has shown interest in the Linux port. Both, I'm 
sure, could use some help at some point, if anyone is interested. 
:-)
[unknown: 10]:
18-Sep-2007
Q: are there any functional changes to the parse dialect in R3, or 
is it purely the speed optimization that gets a thorough restructuring?
Pekr:
10-Oct-2007
Tao is no more - they went bankrupt. Tao was imo not very special, 
yet similar to REBOL. Remember - they had to code in a kind of Rebcode 
ASM to get the speed. And R3, once platform plug-ins are ready, 
will allow replacing certain parts, e.g. rendering.
btiffin:
15-Oct-2007
Petr; A lot of this comes down to what it is going to cost the REBOL 
evaluator.  I don't know, but have a feeling that a lot of intermediate 
results are discarded.  Could be wrong.  But if so, I wouldn't push 
for anything that will slow down current execution speed.  If the 
values are there on a stack today, great.  But I'd guess that only 
the last may be easily (and at zero cost to current run-time) accessible. 
 And with some fancy expressions, what goes on the stack in what 
order may be optimized differently than reading code left to right. 
 I'll ask while pointing out the interest that has been shown here 
by the group.  If coders want a pickable list of expressions today 
we have reduce and friends.  I'm more aiming to get at the last result 
from the console, as I'm always forgetting to put a var: in front 
of test code, especially code that returns an object! that I'd like 
to probe.
Oldes:
16-Nov-2007
Instead of ALTER functionality I use this quite a lot... but I'm 
not sure I would use funtion with refinement for this as I use it 
in loops where speed is important.