
world-name: r3wp

Group: !REBOL3 Proposals ... For discussion of feature proposals [web-public]
shadwolf:
13-Jan-2011
it's not the first time we've had this discussion; years ago we had 
the same one about GC and memory management. it's a cyclic concern 
and a cyclic discussion until we DC it :) (Document & Collect it)
Maxim:
13-Jan-2011
and because memory isn't released to the OS very often, I can make 
a fair guess that the GC doesn't compact on collect.
shadwolf:
13-Jan-2011
and frankly, talking about guru-level stuff without even the beginning 
of a trail on such an issue makes me crazy, just because no one wants 
to make this basic effort ...
Maxim:
13-Jan-2011
no: the memory allocated by any function still has to be managed 
by some sort of heap, since the data can and often does exist beyond 
the stack.
shadwolf:
13-Jan-2011
Maxim, hum, you know you could write a splendid documentation full of 
stupidities to make Carl go into verbose mode and correct it; that's 
what's called "preach the false to obtain the true" :)
BrianH:
13-Jan-2011
The main reason that we haven't speculated on it is that it doesn't 
really matter what the real algorithm is, since it's internal and 
can't be swapped. All that matters is externally determinable traits, 
like whether data can move and when. Any moving collector requires 
a lot more effort to work with C libraries and extensions; that includes 
copying and generational collectors.
shadwolf:
13-Jan-2011
BrianH, in fact we speculated a lot about it with Steeve and Maxim 
around spring 2009, when we were researching area-tc ... at that 
time I should have written a completely stupid high-level documentation 
full of speculations; maybe by now we would have a better view on this 
particular topic
Maxim:
13-Jan-2011
the extensions API actually marshals all of that, so the internals 
of the GC don't really affect the host kit that much.

Apart from the raw image! data, we don't really have any reason 
to believe that any pointer we share with the core is persistent.

In any case I don't rely on it, because AFAIK Carl has insinuated 
that we never really have access to the internals, and the pointers we 
get are actually interfaces, not internal references (for security 
reasons).
Maxim:
13-Jan-2011
the issue is that the GC does affect my code a lot in big apps, and 
I'd like to know exactly when and how it works so I can better guess 
when it's about to jerk my code in an animation or while I'm scrolling 
stuff.
Ladislav:
14-Jan-2011
exactly when and how it works

 - there are at least three reasons why you can't get that:

1) *when* the GC works is "unpredictable" from the programmer's POV 
(depending on other code, etc.)

2) it is (or may be) subject to adjustments or changes, without the 
programmer being able to detect such changes, so why should he know?

3) programming for a specific GC variant should be seen as typical 
bad practice - why would you want to write code that is supposed 
to work only in specific circumstances? Not to mention that you 
actually cannot do that anyway, since the GC should be programmed 
to hide any implementation details
Andreas:
14-Jan-2011
But various bits and pieces of REBOL documentation hint that the 
REBOL GC is a tracing mark-and-sweep variant. And I seem 
to recall that Carl affirmed this at least once, but I don't have 
a link to the source handy.
BrianH:
14-Jan-2011
Alas, that kind of problem is exactly why you don't want to rely 
on the implementation model of the GC. One thing we know about the 
GC is that it is the stop-the-world type. If we need to solve the 
problem you mention, we would have to completely replace the internal 
memory model with a different one and use a GC with a completely 
different algorithm (generational, incremental, parallel, whatever). 
That means that all of your code optimizations would no longer work. 
If you don't try to optimize your code to the GC, it makes it possible 
to optimize the GC itself.
Maxim:
14-Jan-2011
BrianH, if I deliver an application to a client and he says to my 
face... why does it jerk every 2 seconds?


I really don't care if the GC might change.  right now I can't 
do anything to help it.


if it changes, I will adapt my code to it again.   This is platform 
tuning and it is inherently "close to the metal", but in real 
life such things are useful to do ... 

just look at the 1-second-boot Linux machine in that other group.
Andreas:
20-Jan-2011
If we had such a function, the discussion around #1830 and related 
issues (such as a more useful map! type) would be much easier.
Andreas:
20-Jan-2011
The proposal above now is to have equal/strict-equal not respect 
binding, and have equiv/strict-equiv be their binding-respecting 
counterparts.
Andreas:
20-Jan-2011
Framed this way, this discussion would be a lot easier, imo; and 
probably more fruitful.
BrianH:
20-Jan-2011
That is a separate issue that needs its own ticket. FIND and SELECT 
use their own code, so they can only follow the same rules, not use 
the same code.
Andreas:
20-Jan-2011
For reference, also be aware that we have operator shortcuts for 
the comparison functions. At the moment:
=: equal? (and !=)
==: strict-equal? (and !==)
=?: same?


The == operators should then probably become shortcuts for strict-equiv.
Andreas:
20-Jan-2011
It's not decimal precision which makes the FIND (and STRICT-MAP!) 
discussion so cumbersome.
BrianH:
20-Jan-2011
We don't need operators for the equiv branch. STRICT-EQUIV? and SAME? 
are the same thing for words and decimals.
BrianH:
20-Jan-2011
Strangely enough, it's not binding or exact decimal comparison that 
are at issue with FIND or strict-map! either, it's case and type. 
Nonetheless, this would make it easier to point to the distinction 
between STRICT-EQUAL? and STRICT-EQUIV? when talking about those, 
precisely because those aren't at issue.
Andreas:
20-Jan-2011
As you see, case and type are the major distinctions between equal 
and strict-equal.
Andreas:
20-Jan-2011
So that would make it easy to define FIND as using EQUAL? by default, 
and STRICT-EQUAL? with /case.
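For strings, FIND already behaves this way (the existing /case refinement switches to case-sensitive matching), so the proposal would mostly generalize current behaviour; a quick console check:

>> find ["a"] "A"
== ["a"]
>> find/case ["a"] "A"
== none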
BrianH:
20-Jan-2011
We can't rename the /case option - legacy naming rules. And if the 
equivalences are reshuffled this way, we won't have to.
BrianH:
20-Jan-2011
Andreas, the proposal has been added here: http://issue.cc/r3/1834
- if you have any suggestions or tweaks, mention them here or there 
and I'll update it.
BrianH:
20-Jan-2011
I put in a request for Ladislav's feedback in a ticket comment, and 
in other AltME worlds where he works. Including the RMA world, where 
they write code that would be affected by this.
BrianH:
20-Jan-2011
They're a mess here too, but are more useless because they can't 
be searched as easily once they drop off the history, and the relevant 
people who would make the changes aren't generally here that much. 
CC is a much better place for this kind of thing.
Andreas:
20-Jan-2011
And it's really simple: I wanted Ladislav's feedback here first, 
before we write up a ticket and litter it with useless comments. 
Again, please respect that in the future.
BrianH:
20-Jan-2011
Fortunately it can be edited (and I will do that now).
Andreas:
20-Jan-2011
I think the real question is: we also have strict-equal? which is 
(IIUC) as strict as equiv? for numeric code using decimals _and_ 
is also mapped to operators: == and !==.
Ladislav:
20-Jan-2011
Hmm, I would prefer the current state, then:


- == is what I would use frequently, while I would not want to use 
=? in place of it, because of the difference

- the change would require quite a lot of work, and where is the 
guarantee that a new idea does not occur again in a couple of weeks?
BrianH:
20-Jan-2011
The long wording is for precision, and because these tickets serve 
as documentation of the issues for future reference.
Andreas:
20-Jan-2011
Ladislav, thanks. Seems the better option then would be to map == 
and !== to strict-equiv? and strict-not-equiv?.
BrianH:
20-Jan-2011
I didn't even know there was a newline bit, though it seems obvious 
in retrospect. It would be lost in the transfer of the value to the 
stack frame for the function call I bet, because stack frames and 
value slots of contexts don't keep track of newlines. You'd have 
to pass in a reference to the block the value is stored in, I'm guessing.
BrianH:
20-Jan-2011
Guess it's in all value slots, not just the ones in block types, 
and just ignored when inapplicable.
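For reference, the newline marker being discussed is exposed through the NEW-LINE and NEW-LINE? functions, so its behaviour can be checked directly (a small sketch):

blk: [1 2 3]
new-line? next blk        ; is the marker set before the 2?  == false
new-line next blk true    ; set it
probe blk                 ; now molds with a line break before the 2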
BrianH:
21-Jan-2011
Strangely enough, while there is no obvious operator for EQUIV?, 
we could use === for STRICT-EQUIV? and !=== for STRICT-NOT-EQUIV?.
BrianH:
21-Jan-2011
This would solve the operator problem for developers who understand 
the limits of IEEE754, and still let regular developers expect 0.3 
to equal 0.1 + 0.1 + 0.1.
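The 0.3 example is easy to check at the console, assuming (as noted earlier in the thread) that SAME? compares decimals bit-for-bit while EQUAL? rounds:

>> 0.1 + 0.1 + 0.1 = 0.3
== true
>> same? 0.1 + 0.1 + 0.1 0.3
== false   ; the underlying IEEE754 doubles differ in the last bit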
Maxim:
27-Jan-2011
they could just be part of a delayed load series module.  there are 
many missing series handling/probing functions in REBOL.  we end 
up writing them over and over.
Maxim:
27-Jan-2011
prefix? and suffix? just return true if a series starts (or ends) 
with the same items, in the same order, as the second series; 
the second argument is the prefix (or suffix) to compare against

so you can easily do:

unless suffix? file-name %.png [append file-name %.png]
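Neither function is built in; a minimal mezzanine sketch of what Maxim describes could look like this (hypothetical definitions, not a standard API):

prefix?: func [series prefix][
    prefix = copy/part series length? prefix
]
suffix?: func [series suffix][
    suffix = copy skip tail series negate length? suffix
]

>> suffix? %image.png %.png
== true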
Rebolek:
27-Jan-2011
so what's the difference between unique and remove-duplicates ?
Maxim:
27-Jan-2011
though there is a lot of memory overhead for large series, I'd rather 
have a refinement on all set functions, probably /only or something 
similar, to have them work "in-place"

come to think of it, maybe we could make a CC wish for this; it seems 
like it's been a topic of discussion for years, and pretty much everyone 
agrees that both versions are useful.
Maxim:
27-Jan-2011
SWAP is another interesting one... takes two series and swaps their 
content, in-place.
Maxim:
27-Jan-2011
INTERLEAVE is a very useful function: it takes two series and mixes 
them up, with many options like counts, skip, duplicates, all cooperating.
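No such native exists; a bare-bones sketch of the core idea, without the counts/skip/duplicates options mentioned (hypothetical definition):

interleave: func [a b /local out][
    out: make block! 2 * length? a
    repeat i min length? a length? b [
        append out pick a i
        append out pick b i
    ]
    out
]

>> interleave [1 2 3] [x y z]
== [1 x 2 y 3 z]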
Maxim:
27-Jan-2011
anyway, I really think that we should accumulate all of the series 
functions we are using (even the more obscure ones) and make a big 
standard series-handling module in R3.  it helps people who are learning 
REBOL, since they don't have to figure out how to build these 
(sometimes non-obvious to implement) funcs.
Maxim:
27-Jan-2011
this module could even be an internal delayed load extension... which 
can include some functions in C and others in REBOL mezz within the 
extension's module.
Maxim:
27-Jan-2011
hehe I just discovered that SWAP is already part of R2 and R3   :-)
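One caveat worth noting: in R3 at least, the built-in SWAP exchanges single elements at the two given positions rather than whole series contents, so it is narrower than the operation described above:

a: "abc"
b: "xyz"
swap a b
; a is now "xbc" and b is now "ayz" - only the first elements moved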
Maxim:
27-Jan-2011
it's mostly that liquid caches a lot of things for data, so I'm now 
doing all I can for the processes to hit the GC less and less. 
the reason is that it increases performance A LOT.
Maxim:
27-Jan-2011
In the next release of glass, in the TO DO file, I list a few measures 
to increase performance and decrease RAM use by concentrating on 
using as much persistent data as possible rather than throw-away 
recycling.  but that means a complete overhaul of liquid and glob 
(the rendering engine over liquid).
Maxim:
27-Jan-2011
and this can scale to 400 times faster when ram grows above 400MB 
(the GC is obviously exponentially slower)
Maxim:
27-Jan-2011
I've done very little actual optimisation in liquid because it works 
well and the laziness is very efficient.  the laziness actually helps 
a lot for GC because a lot of the data is never recomputed, it's reused. 
 though because I'm only using one layer, the root frames end up 
composing the entire display a few times.
Maxim:
27-Jan-2011
and then I can just render 3 of those at a time, so instead of rendering 
10000 times I can just render 100... in rendering that will help 
too.
Maxim:
27-Jan-2011
steeve, with layers, I split up the rendering into different things. 
  bg/fg/interactive.  and I don't have to render the bg all the time, 
only on resize. 

I'll also be able to render just a single widget directly on a bitmap, 
using AGG clipping to that widget.
BrianH:
28-Jan-2011
See http://issue.cc/r3/1573 for remove-duplicates and the reason 
it's not as good an idea as you might think. It turns out that the 
set functions have to allocate a new series to do their work (for 
series larger than trivial size), even if they were changed to modify 
in place. So you can't avoid the allocation; might as well benefit 
from it.
BrianH:
28-Jan-2011
And it's order safe.
Maxim:
28-Jan-2011
and if the function does this internally in C it will still be MUCH 
faster, which is why I'd much prefer having refinements for in-place 
operation of all set functions.
Maxim:
28-Jan-2011
I know, but everything that's done on the C side will save on speed 
and memory, since the C code doesn't have to go through the GC and all 
that.  in tight loops and real-time continual processing, these details 
make a big difference in the overall smoothness of the app.
Maxim:
28-Jan-2011
which is why it's preferable to do it there anyway... and the fact 
that we only have one function name to remember for the two versions 
is also a big deal for simplicity's sake.
BrianH:
28-Jan-2011
No, I mean that modifying functions should have verb names and non-modifying, 
not-for-effect functions shouldn't. So for the current set functions 
UNIQUE, DIFFERENCE and UNION have good names, but EXCLUDE should 
be called EXCLUDING and INTERSECT should be called INTERSECTING; 
this gets reversed for modifying versions :)
BrianH:
28-Jan-2011
Doesn't matter. The non-modifying version of APPEND is called JOIN, 
and both of those are verbs.
Maxim:
28-Jan-2011
I always wondered why it reduced... I find that very annoying... 
many times I'd use it and it ends up mucking up my data, so I 
almost never use it.
BrianH:
28-Jan-2011
I mostly use REJOIN and AJOIN instead of JOIN, or maybe APPEND COPY.
BrianH:
28-Jan-2011
The only difference between the DEDUPLICATE code in the ticket and 
a native version is that the auxiliary data could be deleted immediately 
after use instead of at the next GC run.
Maxim:
28-Jan-2011
and the data is managed directly by C, not by the interpreter, which 
is faster for sure.
Maxim:
28-Jan-2011
yes, but the extra data used to build it as a mezz, including the 
stack frames and such, is avoided.

I know I'm being picky here, but we're doing a detailed analysis... 
 :-)
Ladislav:
28-Jan-2011
The only difference between the DEDUPLICATE code in the ticket and 
a native version is that the auxiliary data could be deleted immediately 
after use instead of at the next GC run.
 - that would be inefficient as well
BrianH:
28-Jan-2011
INSERT, CLEAR and UNIQUE are already native, so the actual time-consuming 
portions are already optimized. The only overhead you would be reducing 
by making DEDUPLICATE native is constant per function call, and freeing 
the memory immediately just takes a little pressure off the GC at 
collection time. You don't get as much benefit as adding /into to 
REDUCE and COMPOSE gave, but it might be worth adding as a /no-copy 
option, or just as useful to add as a library function.
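For concreteness, a mezzanine DEDUPLICATE along the lines described here - the native UNIQUE does the time-consuming work, and its result is written back over the original series (a sketch consistent with the discussion, not necessarily the ticket's exact code):

deduplicate: func [series /local u][
    u: unique series            ; native; allocates the auxiliary series
    head insert clear series u  ; write the unique items back in place
]

Note that u has to be captured before CLEAR runs, since REBOL evaluates arguments left to right.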
Maxim:
28-Jan-2011
right now the GC is very cumbersome. it waits until it has 3-5MB 
to collect before working, and it can take a noticeable amount of time 
when there is a lot of RAM.  I've had it freeze for a second in some 
apps.

everything we can do to prevent memory being scanned by the GC is 
a good thing.
BrianH:
28-Jan-2011
Mark and sweep only scans the referenced data, not the unreferenced 
data, but adding a lot of unreferenced data makes the GC run more 
often.
Ladislav:
28-Jan-2011
right now the GC is very cumbersome. it waits until it has 3-5MB 
to collect before working, and it can take a noticeable amount of time 
when there is a lot of RAM.  I've had it freeze for a second in some 
apps.

 - what exactly does the GC have in common with the "Deduplicate issue"?
BrianH:
28-Jan-2011
The subject of the set function implementation, or the GC implementation 
and how it compares to direct deallocation? If the latter, then no.
Ladislav:
28-Jan-2011
I meant the note more for Max, and it was about the set function
Ladislav:
28-Jan-2011
The GC is not a slow approach to garbage collection. The main 
problem is that it is "unpredictable", possibly producing delays 
when other processing stops. (But that does not mean that immediate 
collection would be faster.)
Maxim:
28-Jan-2011
just adding a generational system to the GC would help a lot.  I've 
read that some systems also use reference counting and mark-and-sweep 
together to provide better performance on highly volatile data.
Maxim:
29-Jan-2011
the average test is that things done in extensions are at least 10 
times faster, and Carl has shown a few examples which were 30x 
faster.  really, Lad, there is no comparison.
Ladislav:
29-Jan-2011
To find out what is wrong, just write an "in place" version of Deduplicate 
in Rebol, divide the time needed to deduplicate a 300-element series 
by 30, and compare to the algorithm (in Rebol again) allowed to use 
auxiliary data.
Ladislav:
29-Jan-2011
Or, to make it even easier, just use an "in place deduplicate" written 
in Rebol, divide the time to deduplicate a 300-element series by 
30, and compare to the time Unique takes (Unique uses aux data, i.e. 
a more efficient algorithm)
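A rough harness for the comparison Ladislav proposes, using only stock functions (the series construction is just one arbitrary way to get duplicates):

s: random collect [repeat i 300 [keep i // 30]]   ; 300 items, many duplicates

t: now/precise
loop 1000 [unique s]    ; the algorithm that uses auxiliary data
print difference now/precise t

Timing the same loop around an in-place Rebol deduplicate gives the other side of the comparison.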
Oldes:
30-Jan-2011
You talk about it so much that someone could write an extension in 
the same time and give real proof :) What I can say is that using 
an additional series is a big speed enhancement. At least it was 
when I was doing the colorizer.
Ladislav:
30-Jan-2011
You talk about it so much that someone could write an extension in 
the same time and give real proof :) What I can say is that using 
an additional series is a big speed enhancement.

 - actually, it has been proven already; just look at the performance 
 of the UNIQUE, etc. functions
BrianH:
31-Jan-2011
ALIAS function removed in the next version; see http://issue.cc/r3/1163 
http://issue.cc/r3/1164 http://issue.cc/r3/1165 and http://issue.cc/r3/1835 
for details. Also, http://issue.cc/r3/1212 dismissed as unnecessary 
now.
BrianH:
31-Jan-2011
A report like "Words are not case insensitive in extension word blocks" 
would help a lot. Carl has been bug fixing lately, and that includes 
extension bugs.
Gregg:
16-Feb-2011
I have my own versions of the series comparison funcs Max proposed 
on 27-Jan. My versions of SHORTEST and LONGEST take a block of series 
values, and return the given item by length. I don't use them much, 
but there are times they are nice to have.
Marco:
13-Mar-2011
Every refinement with an optional value should also accept a none! 
(implied?)
eg.
sum: func [
    "Return the sum of two numbers."
    arg1 [number!] "first number"
    arg2 [number!] "second number"
    /times "multiply the result"
    amount [number! none!] "how many times"
][
    ; test the argument, NOT the refinement
    either amount [arg1 + arg2 * amount][arg1 + arg2]
    ; or: if not amount [amount: 1] arg1 + arg2 * amount
    ; or: amount: any [amount 1] arg1 + arg2 * amount
]
so it would be possible to do:
summed: sum/times 1 2 (something) ;also if (something) is none

and obviously also:
summed: sum 1 2

instead of:
summed: either (something) [sum/times 1 2 (something)][sum 1 2]
Marco:
13-Mar-2011
instead of my above
summed: either (something) [sum/times 1 2 (something)][sum 1 2]
I wish I could use
summed: sum/times 1 2 (something) ;also if (something) is none

for every REBOL function, and also for my own functions, preferably 
without explicitly adding none! as a required type.
Andreas:
13-Mar-2011
And how would you pass NONE as a refinement argument value?
Oldes:
14-Mar-2011
And finally, in your case, why must you use a refinement at all, 
when you don't need it?

my-sum: func [
	arg1 [integer!]
	arg2 [integer!]
	amount [none! integer!]
][
	arg1 + arg2 * any [amount 1]
]
>> my-sum 1 2 3
== 9
>> my-sum 1 2 none
== 3
Marco:
17-Mar-2011
in the foo function, /something is a refinement and not an _optional_ 
refinement. In your my-sum function, amount is not a refinement, and 
my-sum 1 2 none == 3 is correct. What I am saying is that the extra 
none! adds polymorphism (but I have not investigated that too much, 
so I could be mistaken), so you can write: sum 1 2 or sum 1 2 3 or 
sum 1 2 none, without checking for none before calling the function.
Kaj:
17-Mar-2011
And you can't write SUM 1 2 without a refinement if you have an extra 
AMOUNT parameter. The number of parameters is fixed in REBOL. The 
way to have optional parameters is to use refinements (or a BLOCK! 
argument)
Group: !REBOL3 Parse ... REBOL3 Parse [web-public]
BrianH:
14-Jan-2011
- A recovering loader for REBOL source (using TRANSCODE and a manually 
created stack)

- A dialect compiler for R3 PARSE rules that generates the R2 workaround 
code
Steeve:
14-Jan-2011
But some ideas stay relevant.
Incremental parsing.
Allowing incremental syntactical checks.

Allowing fast modification of the document (a sort of DOM capability): 
only what is modified induces style reconstruction and re-rendering.
shadwolf:
14-Jan-2011
could this be linked to a wiki page on rebolfrance.info and fed 
with the tips and tricks from here?
BrianH:
14-Jan-2011
Does anyone know if there is any difference between the REJECT and 
FAIL operations? Was REJECT just added as a FAIL synonym because 
ACCEPT was added?
Steeve:
14-Jan-2011
The internal representation of the document is the hard point.
Incremental parsing means caching some info during parsing.

What to cache, what not to cache, and in which form: that is the 
question.
Steeve:
14-Jan-2011
same question with accept and break.
BrianH:
14-Jan-2011
ACCEPT and BREAK break from loops, though loops might not mean what 
you think they mean.
Steeve:
14-Jan-2011
so far, the syntactical rules for REBOL scripts:

https://docs.google.com/document/pub?id=1kUiZxvzKTgpp1vL2f854W-8OfExbwT76QUlDnUHTNeA

(I haven't produced the transcode rule yet, though.)

But it's enough to perform the prefetch of a document and to construct 
the internal representation.
BrianH:
14-Jan-2011
According to the original proposal and what Carl said when he implemented 
it,
>> parse [1] [thru [1 2 1]]
== true
should be a syntax error too.
Ladislav:
14-Jan-2011
Difference between REJECT and FAIL:

>>  parse [] [while [reject]]
== false

>> parse [] [while [fail]]
== true
Steeve:
14-Jan-2011
THRU and TO have a lot of restrictions: you can't use them with 
charsets, and you can't use nested sub-rules or anything other than 
simple values.
Ladislav:
14-Jan-2011
as opposed to that, ACCEPT and BREAK don't differ
Ladislav:
14-Jan-2011
Steeve, instead of TO RULE use WHILE [at rule break | skip | reject], 
and instead of THRU RULE use WHILE [rule break | skip | reject]
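Applied to the charset restriction Steeve mentioned, the THRU workaround reads like this (a sketch of the pattern; behaviour may vary across builds of that time):

digit: charset "0123456789"

parse "ab3cd" [while [digit break | skip | reject] "cd"]
; == true - the WHILE loop plays the role of THRU DIGIT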