world-name: r3wp

Group: !REBOL3 Proposals ... For discussion of feature proposals [web-public]
BrianH:
20-Jan-2011
I didn't even know there was a newline bit, though it seems obvious 
in retrospect. It would be lost in the transfer of the value to the 
stack frame for the function call I bet, because stack frames and 
value slots of contexts don't keep track of newlines. You'd have 
to pass in a reference to the block the value is stored in, I'm guessing.
BrianH:
20-Jan-2011
I used to think that newlines in blocks were hidden values in the 
block itself. It makes sense to just have it be a bit in a value 
slot.
Andreas:
20-Jan-2011
It is a value bit.
Andreas:
20-Jan-2011
>> a: copy [1 2 3]
== [1 2 3]

>> new-line next a on     
== [
    2 3
]

>> b: copy []             
== []

>> append b first a   
== [1]

>> append b second a  
== [1
    2
]

>> append b third a   
== [1
    2 3
]
Ladislav:
20-Jan-2011
Well, it is not ideal this way, for two reasons:


- it does not affect all possible line break positions in a block 
(there are n + 1 such positions in a block of length n, as can be 
easily figured)
- it "pollutes" the values, affecting their identity, e.g.
BrianH:
20-Jan-2011
It also makes the "immediate values aren't modifiable" argument a 
little more iffy.
Ladislav:
20-Jan-2011
{It also makes the "immediate values aren't modifiable" argument 
a little more iffy.} - I do not use the notion of "immediate values"; 
nevertheless, it does not, I would say
BrianH:
20-Jan-2011
Immediate values are a bit of an iffy concept in REBOL anyway, so it's 
probably for the best that you don't use the notion. Nonetheless, 
there is a documentary typeset in R3 called immediate!, which includes 
the values that are fully stored within a value slot, rather than 
referencing memory elsewhere.
Maxim:
27-Jan-2011
proposal for four very useful series functions....

shorter?: func [a [series!] b [series!]][
	lesser? length? a length? b
]

longer?: func [a [series!] b [series!]][
	greater? length? a length? b
]

shortest: func [a [series!] b [series!]] [
	either shorter? a b  [a][b]
]

longest: func [a [series!] b [series!]] [
	either longer? a b  [a][b]
]
Maxim:
27-Jan-2011
they could just be part of a delayed load series module.  there are 
many missing series handling/probing functions in REBOL.  we end 
up writing them over and over.
Maxim:
27-Jan-2011
prefix? and suffix? just return true if a series starts (or ends) with 
the same items in the same order as the second one.  
the second argument is the prefix (or suffix) to compare

so you can easily do:

unless suffix? file-name %.png [append file-name %.png]
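For illustration, a minimal sketch of how these two might be written (my guesses at the implementation, not Maxim's actual code):

```rebol
; hypothetical implementations, for illustration only
prefix?: func [series [series!] prefix [series!]][
    prefix = copy/part series length? prefix
]

suffix?: func [series [series!] suffix [series!]][
    suffix = skip tail series negate length? suffix
]
```

Series equality compares from the current position to the tail, which is what makes the SUFFIX? one-liner work.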
Maxim:
27-Jan-2011
unique returns a new series.
Steeve:
27-Jan-2011
yeah, it's a shame that unique makes a new series.
Maxim:
27-Jan-2011
it's a shame that all "set" functions make new series.  we could just 
as easily copy before calling unique:
new: unique copy series
Maxim:
27-Jan-2011
though there is a lot of memory overhead for large series, I'd rather 
have a refinement on all sets probably   /only  or something similar, 
to have them work "in-place"


come to think of it, maybe we could make a CC wish for this; seems 
like it's been a topic of discussion for years, and pretty much everyone 
agrees that both versions are useful.
Maxim:
27-Jan-2011
INTERLEAVE is a very useful function, using two series and mixing 
them up with many options like counts, skip, duplicates, all cooperating.
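Stripped of the counts/skip/duplicates options Maxim mentions, the core of such a function might look like this (a sketch, not his implementation):

```rebol
interleave: func [a [series!] b [series!] /local out][
    out: make a 0    ; empty result of the same series type
    while [not any [tail? a tail? b]][
        append out first a
        append out first b
        a: next a
        b: next b
    ]
    out
]

>> interleave [1 2 3] [x y z]
== [1 x 2 y 3 z]
```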
Maxim:
27-Jan-2011
anyway, I really think that we should accumulate all of the series 
functions we are using (even the more obscure ones) and make a big 
standard series handler module in R3.  it helps people who are learning 
REBOL since they don't have to figure out as much how to build these 
(sometimes non-obvious to implement) funcs.
Maxim:
27-Jan-2011
though it's missing a few refinements to make it really useful.
Maxim:
27-Jan-2011
I bother a lot  ;-)
Steeve:
27-Jan-2011
But doing a sorting algorithm without any clue about memory allocations 
is just ... clueless ;-)
Steeve:
27-Jan-2011
several tens of megabytes to just display a simple GUI pisses me 
off
Maxim:
27-Jan-2011
it's mostly that liquid caches a lot of things for data, so I'm now 
doing all I can for the processes to hit the GC less and less.
the reason is that it increases performance A LOT.
Maxim:
27-Jan-2011
In the next release of glass, in the TO DO file, I list a few measures 
to increase performance and decrease ram use by concentrating on 
using as much persistent data as possible rather than throw-away recycling. 
 but that means a complete overhaul of liquid and glob (the rendering 
engine over liquid).
Steeve:
27-Jan-2011
128 bits for a value slot hurts a lot in some cases
Maxim:
27-Jan-2011
I've done very little actual optimisation in liquid because it works 
well and the laziness is very efficient.  the laziness actually helps 
a lot for GC because a lot of the data is never recomputed, it's reused. 
 though because I'm only using one layer, the root frames end up 
composing the entire display a few times.
Maxim:
27-Jan-2011
with 3 layers, I should bring that down a lot.  I'll also be rendering 
on a bitmap directly, enabling clipped region rendering, using only 
a widget's new drawing instead of the whole scene.
Steeve:
27-Jan-2011
a layer = an object/gob
Maxim:
27-Jan-2011
each layer creates a tiny object for every widget, but the size of 
the draw blocks is immense, so it will save memory in the end.  if 
we have 10 objects storing 10000 items 10 times, vs 10 storing parts 
of those 10000 objects 10 times, the memory saving should be significant.
Maxim:
27-Jan-2011
and then I can just render 3 of those at a time, so instead of rendering 
10000 times I can just render 100... in rendering that will help 
too.
Maxim:
27-Jan-2011
steeve, with layers, I split up the rendering into different things: 
bg/fg/interactive.  and I don't have to render the bg all the time, 
only on resize. 

I'll also be able to render just a single widget directly on a bitmap, 
using AGG clipping to that widget.
BrianH:
28-Jan-2011
See http://issue.cc/r3/1573 for remove-duplicates and the reason 
it's not as good an idea as you might think. It turns out that the 
set functions have to allocate a new series to do their work (for 
series larger than trivial size), even if they were changed to modify 
in place. So you can't avoid the allocation; might as well benefit 
from it.
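For context, the DEDUPLICATE proposal in that ticket boils down to something like this (paraphrased, so treat it as a sketch rather than the ticket's exact code):

```rebol
deduplicate: func [series [series!] /local u][
    u: unique series            ; the auxiliary allocation happens here anyway
    head insert clear series u  ; then the result is written back in place
]
```

The series is modified in place from the caller's point of view, but the auxiliary copy from UNIQUE is still allocated, which is BrianH's point.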
Maxim:
28-Jan-2011
it's a good idea, because I can't copy the series; it's linked in many 
places.
BrianH:
28-Jan-2011
Look at the proposal in that ticket. It works the same as yours but 
faster.
Maxim:
28-Jan-2011
using refinements also means fewer functions to remember... there 
are quite a few set functions.  I don't want each one duplicated 
as a mezz function.
Maxim:
28-Jan-2011
I know, but everything that's done on the C side will save on speed 
and memory, since the C doesn't have to go through the GC and all 
that.  in tight loops and real-time continual processing, these details 
make a big difference in overall smoothness of the app.
Maxim:
28-Jan-2011
which is why it's preferable to do it there anyway... and the fact 
that we only have one function name to remember for the two versions 
is also a big deal for simplicity's sake.
Maxim:
28-Jan-2011
I just realized that if Carl is using a hash table for some of the 
set functions... does this mean that it's subject to the 24-bit hash 
problem we discovered in maps?
Maxim:
28-Jan-2011
/no-copy would be really nice... it's been a recurring discussion 
for years by many of us, which proves that it's a required feature 
IMHO.
BrianH:
28-Jan-2011
Yup, so it's not a direct correspondence.
Ladislav:
28-Jan-2011
I am pretty sure that:


1) the set operations in Rebol are in fact GC-safe using the standard 
meaning of the sentence

2) it is always necessary to use auxiliary data, if the wish is to 
do the set operation efficiently

3) nobody pretending to need a modifying version really needs an 
inefficient variant, which does not use any auxiliary data
BrianH:
28-Jan-2011
I think that people who need a modifying version really need it, 
but the rest of us need the non-modifying default :)
BrianH:
28-Jan-2011
The only difference between the DEDUPLICATE code in the ticket and 
a native version is that the auxiliary data could be deleted immediately 
after use instead of at the next GC run.
Maxim:
28-Jan-2011
yes, but the extra data used to build it as a mezz, including the 
stack frames and stuff, is prevented.   


I know I'm being picky here.  but we're doing a detailed analysis.. 
 :-)
Ladislav:
28-Jan-2011
The only difference between the DEDUPLICATE code in the ticket and 
a native version is that the auxiliary data could be deleted immediately 
after use instead of at the next GC run.
 - that would be inefficient as well
BrianH:
28-Jan-2011
INSERT, CLEAR and UNIQUE are already native, so the actual time-consuming 
portions are already optimized. The only overhead you would be reducing 
by making DEDUPLICATE native is constant per function call, and freeing 
the memory immediately just takes a little pressure off the GC at 
collection time. You don't get as much benefit as adding /into to 
REDUCE and COMPOSE gave, but it might be worth adding as a /no-copy 
option, or just as useful to add as a library function.
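The /into pattern BrianH compares this to works like so in R3 (from memory; /into returns the position past the inserted values, hence the HEAD):

```rebol
>> head reduce/into [1 + 2 3 + 4] copy []
== [3 7]
```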
Maxim:
28-Jan-2011
right now the GC is very cumbersome. it waits until it has 3-5MB 
before working. and it can take a noticeable amount of time 
when there is a lot of ram.  I've had it freeze for a second in some 
apps.

everything we can do to prevent memory being scanned by the GC is 
a good thing.
BrianH:
28-Jan-2011
Mark and sweep only scans the referenced data, not the unreferenced 
data, but adding a lot of unreferenced data makes the GC run more 
often.
Ladislav:
28-Jan-2011
right now the GC is very cumbersome. it waits until it has 3-5MB 
before working. and it can take a noticeable amount of time 
when there is a lot of ram.  I've had it freeze for a second in some 
apps.

 - what exactly does the GC have in common with the "Deduplicate issue"?
Ladislav:
28-Jan-2011
This is all just pretending. If what is needed is a kind of incremental/generational/whichever 
other GC variant, then no "Deduplicate" can help with that
BrianH:
28-Jan-2011
We don't need DEDUPLICATE to help with the GC. He was suggesting 
that having it be native would help reduce the pressure on the GC 
when used for other reasons instead of a mezzanine version. I don't 
think it will by much.
Maxim:
28-Jan-2011
if we implement deduplicate as a mezz, we are juggling data which 
invariably taxes the GC.  doing this natively helps prevent the 
GC from working too hard.


the problem is not how long/fast the allocation/deallocation is... 
it's the fact that cramming data for the GC to manage will make the 
GC trigger longer/more often.
Ladislav:
28-Jan-2011
The GC is not a slow approach to garbage collection. The main 
problem is that it is "unpredictable", possibly producing delays 
when other processing stops. (but that does not mean that immediate 
collection would be faster)
Ladislav:
28-Jan-2011
the "stop the world" approach is disturbing for user interface, which 
might need a different type of GC...
BrianH:
28-Jan-2011
Also, multitasking could require a different kind of GC. Any thoughts 
on this, Ladislav?
Maxim:
28-Jan-2011
just adding a generational system to the GC would help a lot.  I've 
read that some systems also use reference counting and mark and sweep 
together to provide better performance on data which is highly subject 
to volatility.
Maxim:
28-Jan-2011
though I guess it requires a bit more structured code than rebol 
to properly predict what is expected to be volatile.
Ashley:
28-Jan-2011
re DEDUPLICATE ... it's not just GUI code that would benefit, DB 
code often needs to apply this on the result set. "buffer: unique 
buffer" is a common idiom, which can be problematic with very large 
datasets.
Ladislav:
29-Jan-2011
"it's not just GUI code that would benefit, DB code often needs to 
apply this on the result set. buffer: unique buffer is a common 
idiom, which can be problematic with very large datasets" 

- that is exactly where I was trying to explain it was just a 
superstition: buffer: unique buffer is as memory hungry as you can 
get; any Deduplicate is just pretending it does not happen, when, 
in fact, that is not true
Ashley:
29-Jan-2011
Does DEDUPLICATE *have to* create a new series? How inefficient would 
it be to SORT the series then REMOVE duplicates starting from the 
TAIL? Inefficient as a mezz, but as a native?
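Ashley's idea as a mezz sketch (walking forward instead of from the TAIL; note that SORT reorders the input, which not every caller can tolerate):

```rebol
dedup-sorted: func [series [series!] /local pos][
    sort series
    pos: series
    while [not tail? pos][
        ; drop the neighbor while it matches the current item
        either pos/1 = pick pos 2 [remove next pos][pos: next pos]
    ]
    series
]

>> dedup-sorted [3 1 2 1 3]
== [1 2 3]
```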
Ladislav:
29-Jan-2011
Deduplicate does not have to use auxiliary data, if the goal is to 
use an inefficient algorithm. But, in that case, there's no need 
to have it as a native.
Maxim:
29-Jan-2011
the average test is that things done in extensions are at least 10 
times faster, and Carl has shown a few examples which were 30x 
faster.  really Lad, there is no comparison.
Ladislav:
29-Jan-2011
To find out what is wrong, just write an "in place" version of Deduplicate 
in Rebol, divide the time needed to deduplicate a 300 element series 
by 30, and compare to the algorithm (in Rebol again) allowed to use 
auxiliary data.
Ladislav:
29-Jan-2011
Or, to make it even easier, just use an "in place deduplicate" written 
in Rebol, divide the time to deduplicate a 300 element series by 
30, and compare to the time Unique takes (Unique uses aux data, i.e. 
a more efficient algorithm)
Ladislav:
29-Jan-2011
You shall find out, that the difference made by an inappropriate 
algorithm is so huge, that even as a native it would be too slow 
compared to an efficient algorithm written in Rebol
Oldes:
30-Jan-2011
You talk about it so much that someone could write an extension in 
the same time and give real proof :) What I can say is that using an 
additional series is a big speed enhancement. At least it was when I 
was doing the colorizer.
Ladislav:
30-Jan-2011
You talk about it so much that someone could write an extension in 
the same time and give real proof :) What I can say is that using an 
additional series is a big speed enhancement.

 - actually, it has been proven already, just look at the performance 
 of the UNIQUE, etc. functions
BrianH:
31-Jan-2011
A report like "Words are not case insensitive in extension word blocks" 
would help a lot. Carl has been bug fixing lately, and that includes 
extension bugs.
BrianH:
31-Jan-2011
Please check if an object can be used by extension code to resolve 
the case aliases. I don't really see how they could if the words 
are translated to numbers at command call time, but that might be 
a limit of my imagination.
Maxim:
31-Jan-2011
;-----------------
;-     swap-values()
;
; given two words, it will swap the values these words reference
; or contain.
;-----------------
swap-values: func [
	'a 'b
	/local c
][c: get a  set a get b  set b c]


>> a: 5
>> b: 1
>> if a > b [swap-values a b]
>> a
== 1
>> b
== 5


I've been using this to make sure inputs are properly ordered or 
ranges normalized when it matters further down in the code.
Gregg:
16-Feb-2011
I have my own versions of the series comparison funcs Max proposed 
on 27-Jan. My versions of SHORTEST and LONGEST take a block of series 
values, and return the given item by length. I don't use them much, 
but there are times they are nice to have.
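My guess at what the block-taking variant Gregg describes looks like (not his code):

```rebol
; takes a block of series values, returns the longest one
longest: func [blk [block!] /local result][
    result: first blk
    foreach s next blk [
        if (length? s) > length? result [result: s]
    ]
    result
]

>> longest [[1 2] "abcd" [x y z]]
== "abcd"
```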
Marco:
13-Mar-2011
Every refinement with an optional value should also accept a none! 
(implied?)
eg.
sum: func [
    "Return the sum of two numbers."
    arg1 [number!] "first number"
    arg2 [number!] "second number"
    /times "multiply the result"
    amount [number! none!] "how many times"
][	; test argument, NOT refinement
    either amount [arg1 + arg2 * amount][arg1 + arg2]
	; or if not amount [amount: 1] arg1 + arg2 * amount
	; or amount: any [amount 1] arg1 + arg2 * amount
]
so it would be possible to do:
summed: sum/times 1 2 (something) ;also if (something) is none

and obviously also:
summed: sum 1 2

instead of:
summed: either (something) [sum/times 1 2 (something)][sum 1 2]
Andreas:
13-Mar-2011
apply :sum [a b (something) c]
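In R3's APPLY the refinement slot is a LOGIC! value, and when the refinement is off the arguments that follow it are forced to NONE, which is close to the behavior Marco wants (assuming the SUM defined above):

```rebol
>> apply :sum [1 2 true 3]    ; /times on, amount = 3
== 9

>> apply :sum [1 2 something 3]  ; refinement driven by the data;
                                 ; amount becomes NONE when SOMETHING is none
```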
Marco:
13-Mar-2011
The question is how to have a simple method of calling any function 
which accepts an optional argument, without rewriting the code that 
calls it every time.
Marco:
13-Mar-2011
instead of my above
summed: either (something) [sum/times 1 2 (something)][sum 1 2]
I wish I could use
summed: sum/times 1 2 (something) ;also if (something) is none

for every Rebol function and also for my own functions, preferably 
without explicitly adding none! as a required type.
Andreas:
13-Mar-2011
And how would you pass NONE as a refinement argument value?
Marco:
17-Mar-2011
in the foo function /something is a refinement and not an _optional_ 
refinement. In your my-sum function amount is not a refinement and 
my-sum 1 2 none == 3 is correct. What I am saying is that the extra 
none! adds polymorphism (but i have not investigated that too much 
so i could be mistaken), so you can write: sum 1 2 or sum 1 2 3 or 
sum 1 2 none without checking for none before calling the function.
Kaj:
17-Mar-2011
And you can't write SUM 1 2 without a refinement if you have an extra 
AMOUNT parameter. The number of parameters is fixed in REBOL. The 
way to have optional parameters is to use refinements (or a BLOCK! 
argument)
Group: !REBOL3 Parse ... REBOL3 Parse [web-public]
Steeve:
14-Jan-2011
Brian, yes, I will provide a special (already constructed) rule to 
parse a stream of valid rebol values/words
BrianH:
14-Jan-2011
Would this be a good place for TRANSCODE discussions too?
BrianH:
14-Jan-2011
A couple projects that I would find interesting:
BrianH:
14-Jan-2011
- A recovering loader for REBOL source (using TRANSCODE and a manually 
created stack)

- A dialect compiler for R3 PARSE rules that generates the R2 workaround 
code
BrianH:
14-Jan-2011
Steeve, if that rule is already constructed, it could have use beyond 
an incremental parser. Any chance it could be MIT licensed for inclusion 
in R2/Forward as a TRANSCODE backport?
Steeve:
14-Jan-2011
Well, I already have in mind to use the lexer to perform an incremental 
loader.

So that we could use it to perform incremental execution of rebol 
code 
-> a code DEBUGGER
Steeve:
14-Jan-2011
that was the idea behind area-tc to begin with, but my design was 
too narrow-minded.
It ended in a bloated huge script of 45 Kb, with bad readability.
It's requiring a more modular approach.
I think I learned a lot from that mistake.
BrianH:
14-Jan-2011
Btw, if there are other bugs in R3's PARSE that need to be discussed 
before being submitted to CureCode, this seems like a good group 
to do so.
shadwolf:
14-Jan-2011
could this be linked with a wiki page on rebolfrance.info and fed 
with the tips and tricks from here?
BrianH:
14-Jan-2011
Does anyone know if there is any difference between the REJECT and 
FAIL operations? Was REJECT just added as a FAIL synonym because 
ACCEPT was added?
Steeve:
14-Jan-2011
Btw, i found a bug I think, with INTO
BrianH:
14-Jan-2011
There was a change to INTO in R3, so any bugs in it might be that, 
or related to that. What bug?
BrianH:
14-Jan-2011
The block passed to INTO is (in theory) supposed to be treated the 
same as a top-level rule passed to PARSE.
Steeve:
14-Jan-2011
Yeah but... I'm a little disappointed
BrianH:
14-Jan-2011
The top-level rule is the only rule that isn't considered to be a 
loop, but I guess that counts for INTO as well. All other rules have 
an implicit 1 1 loop around them.
BrianH:
14-Jan-2011
It's like wrapping every statement in a LOOP 1 [...] in the DO dialect.
Steeve:
14-Jan-2011
so far, the syntactical rules for rebol scripts:

https://docs.google.com/document/pub?id=1kUiZxvzKTgpp1vL2f854W-8OfExbwT76QUlDnUHTNeA

(Not produced the transcode rule though)

But it's enough to perform the prefetch of a document and to construct 
the internal representation.
BrianH:
14-Jan-2011
Note: If using the new TO or THRU syntax to recognize something that 
would require a QUOTE to recognize otherwise, use the QUOTE in the 
TO or THRU. If you don't, there are bugs.
BrianH:
14-Jan-2011
>> parse [1] [thru [quote 1]]
== true

>> parse [1] [thru [1]]
== true  ; should be a syntax error

>> parse [1] [thru [1 2]]
== true  ; should be a syntax error

>> parse [1] [thru [1 2 1]]
== true

>> parse [1] [thru [1 2 2]]
== true  ; should be false

>> parse [1] [thru [1 2 1 1]]
== true  ; should be a syntax error

>> parse [1] [thru [quote 2]]
== false
BrianH:
14-Jan-2011
According to the original proposal and what Carl said when he implemented 
it,
>> parse [1] [thru [1 2 1]]
== true
should be a syntax error too.
Ladislav:
14-Jan-2011
(FAIL is a rule that fails, while REJECT makes the rule that contains 
it fail, even if it is WHILE, or ANY)
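If I read that right, the difference shows up like this (my reading of the semantics; worth verifying in the console):

```rebol
>> parse [a] [any ['a fail] 'a]    ; FAIL fails the iteration; ANY stops and succeeds
== true

>> parse [a] [any ['a reject] 'a]  ; REJECT makes the enclosing ANY itself fail
== false
```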
Steeve:
14-Jan-2011
Btw, when a stack overflow occurs, it terminates the console without 
sending any warning.
Did you experience that?
BrianH:
14-Jan-2011
Which means that stack overflows aren't being checked by PARSE. Write 
a ticket.