World: r3wp

[Core] Discuss core issues

Cyphre
26-Jul-2011
[1973]
I'm not sure which behaviour is better though, i.e. copying the whole string 
vs. just the 'ranges' defined by the series values in the block.
Steeve
26-Jul-2011
[1974]
Don't know, but it would be a problem now, because I use this behavior 
to hide data in blocks (like absolute identifiers)
Cyphre
26-Jul-2011
[1975]
My guess is the current copy/deep behaviour can consume too much 
memory if the strings (or whatever data) are too big.
Steeve
26-Jul-2011
[1976]
You have a point here
Cyphre
26-Jul-2011
[1977]
Here is another way to illustrate it clearly:
 
>> s: "abc"
== "abc"
>> mold/all copy/deep reduce [next s]
== {[#[string! "abc" 2]]}
>> mold/all copy next s
== {"bc"}
Steeve
26-Jul-2011
[1978]
Is that true only for strings?
Cyphre
26-Jul-2011
[1979x2]
I think it works the same on all series.
>> s: [1 2 3]
== [1 2 3]
>> mold/all x: copy reduce [next s]
== "[#[block![1 2 3]2]]"
>> mold/all y: copy/deep reduce [next s]
== "[#[block![1 2 3]2]]"
>> mold/all z: copy next s
== "[2 3]"
>> same? x/1 next s
== true
>> same? y/1 next s
== false
>> same? z/1 next s
== false
>>
Steeve
26-Jul-2011
[1981x3]
Ok, I agree, it's a problem for 
>> same? y/1 next s
== false
the hidden part should not have been copied
Good point Geomol, you found a new ticket
Cyphre
26-Jul-2011
[1984]
Yes, the whole series is copied from HEAD in the copy/deep case... not 
sure if Carl did that intentionally, but it could be a bug that can 
lead to an unwanted bigger memory footprint in some scripts IMO.
Maxim
26-Jul-2011
[1985]
Ladislav, typical serialization only takes into account the data 
traversed (in the order it is encountered). In the above example, if 
block 2 was encountered first, then it would look like this:


#[block  2  [ val3  #[block  1 [ val1 val2 ]]  val4 ]]  ; shared block containing another shared block
#[block  1 ]


(I kept the ids just to make the comparison easier, but it would 
have effectively swapped 1 and 2.)


I built such a system for a project a long time ago (I lost the code, 
unfortunately). It stored graphics in block formats and used shared 
sub-blocks in many objects. So I built a reference-safe load and 
save which made all of the above completely invisible... I used tags to 
refer to blocks, and a tag-based block serialization which was very 
similar to the above. It was quite specific to my application, but 
it could have been generalized. I did have to make it a two-step 
process; if it were internal, it could be done in a single pass.


A serialization can be made more persistent (across several serializations 
and even sessions), but that is not what REBOL is built for, nor 
would it be very useful within REBOL's semantics IMHO.
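
A little sketch of the first pass of such a two-step save (illustrative names only, not Maxim's lost code): walk a block and collect the sub-blocks that are referenced more than once, i.e. the ones a reference-safe save would have to tag.

; identity-based FIND (R2's FIND has no /same refinement)
find-same: func [list [block!] value] [
    forall list [if same? first list :value [return list]]
    none
]

find-shared: func [blk [block!] /local seen shared walk] [
    seen:   copy []    ; every sub-block met so far
    shared: copy []    ; sub-blocks met at least twice -> these need an id
    walk: func [b /local item] [
        foreach item b [
            if block? :item [
                either find-same seen item [
                    unless find-same shared item [append/only shared item]
                ][
                    append/only seen item
                    walk item
                ]
            ]
        ]
    ]
    walk blk
    shared
]

>> sub: [a b]
>> find-shared reduce [sub 1 sub]
== [[a b]]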
Ladislav
26-Jul-2011
[1986]
So, shall I understand that you actually propose to use this 
kind of format for cyclic blocks?
Maxim
26-Jul-2011
[1987x2]
Unless someone finds a better alternative, I don't see anything wrong 
with this syntax... maybe it needs a few more components (like keeping 
tabs on the index of the block).
In fact, maybe this is even better:

 #[block  #1 2 [ val1 val2 ]] 


Here an issue! value is used for the UID and the integer is used as 
the block index.
Since this is just managed within LOADing, it shouldn't force the 
word count to increase; it's just a notation.
Gabriele
27-Jul-2011
[1989]
I solved the problem of serializing circular blocks etc. a long time 
ago, by adding DISMANTLE and REBUILD. E.g. see http://www.colellachiara.com/soft/YourValues/libs/persist.r

So, it can even be done as a mezz layer above MOLD and LOAD.
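
As a rough illustration of that mezz-layer idea (hypothetical names, not the persist.r code), here is the simplest possible case, a block that directly contains itself: break the cycle before MOLD and restore it after LOAD.

save-cyclic: func [blk [block!] /local flat i] [
    flat: copy blk
    ; replace direct self-references with a placeholder before molding
    repeat i length? flat [
        if same? pick flat i blk [poke flat i #self]
    ]
    mold flat
]

load-cyclic: func [text [string!] /local blk pos] [
    blk: load text
    ; put the self-references back where the placeholder was
    while [pos: find blk #self] [change/only pos blk]
    blk
]

>> b: copy [1 2]
>> append/only b b                 ; b now refers to itself
>> b2: load-cyclic save-cyclic b
>> same? b2 pick b2 3              ; the cycle survives the round trip
== true

A general version would of course have to handle indirect cycles and shared sub-blocks with a table of ids, which is where something like DISMANTLE/REBUILD comes in.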
Ladislav
27-Jul-2011
[1990]
I guess that you meant the DISMANTLE-BLOCK and REBUILD-BLOCK functions. 
Nevertheless, the source code does not contain the definition of 
the CUSTOM-TYPE? function.
Gabriele
28-Jul-2011
[1991]
No, DISMANTLE and REBUILD are not defined there specifically; they 
are defined as actions for custom types (e.g. pblock! etc.). But they 
are based on those two IIRC. http://www.colellachiara.com/soft/YourValues/
Pekr
28-Jul-2011
[1992]
I am doing some automatic checking of telephone reports, and I need 
advice: how do I easily subtract two time values that go through 
midnight?

>> (fourth 29-7-2011/00:02:00) - (fourth 28-7-2011/23:52:00)
== -23:50


If I subtract the whole date values, it returns 1; it simply counts 
just the dates, not the time...
Maxim
28-Jul-2011
[1993]
>> difference 29-7-2011/00:02:00 28-7-2011/23:52:00
== 0:10
GrahamC
28-Jul-2011
[1994]
This works up to a point, whereupon you start getting overflow.
Sunanda
28-Jul-2011
[1995]
Overflow happens on dates around 68 years apart -- so probably safe 
for Petr's intended usage.
Maxim
28-Jul-2011
[1996]
68.05 years to be precise ;-) that's the seconds resolution in 31 bits:

(power 2 31) / 60 / 60 / 24 / 365.25
Pekr
29-Jul-2011
[1997]
Max - thanks :-) I wonder why there is a difference between 'difference 
and subtraction...
Geomol
29-Jul-2011
[1998x3]
You were subtracting 23 hours and 52 minutes from zero hours and 
2 minutes. That's -23:50.

With difference, the whole date plus time was given, so the difference 
is positive in this example.
Subtract on dates gives you the number of days. Subtract on times gives 
you hours, minutes and seconds. Difference on dates (incl. times) gives 
you hours, minutes and seconds.
Maybe subtract on dates should only give days if no time is given, 
and otherwise work as difference?
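
Putting the three cases side by side with Pekr's values:

>> 29-7-2011 - 28-7-2011                               ; dates: days
== 1
>> 0:02 - 23:52                                        ; times: hours/minutes/seconds
== -23:50
>> difference 29-7-2011/00:02:00 28-7-2011/23:52:00    ; dates incl. time
== 0:10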
Henrik
4-Aug-2011
[2001]
I have a COPY/DEEP question:


When doing a COPY/DEEP, the first level requires that the argument 
be a series!, port! or bitset!. But when doing a /DEEP, I imagine 
that COPY traverses a series for elements and uses a different process 
than at the first level to determine whether an element can be 
copied. If it can't, it silently passes the value through.

Why can't we do that on the first level as well?
Gregg
4-Aug-2011
[2002]
Just so you can put COPY everywhere, without caring about the type?
Henrik
4-Aug-2011
[2003]
I'm not really sure I want to, but I think it's interesting that 
there is a difference between the first level and other levels.
Gregg
4-Aug-2011
[2004]
It makes sense to me. Copy what can be copied. I like having the 
type check at the top level.
Henrik
4-Aug-2011
[2005]
I do dislike having to do the:

either series? var [copy/deep var][var]

thing. Generally, the programming needed to absolutely deep copy 
a value is too elaborate.
Gregg
4-Aug-2011
[2006]
I don't do that very often. If I did, I would just wrap it.
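
A minimal sketch of such a wrapper (the name COPY-ANY is just for illustration, not an existing function): copy series values, pass anything else through unchanged.

copy-any: func [
    "Copy series values; return any other value as-is."
    value
    /deep "Also copy nested series"
][
    either series? :value [
        either deep [copy/deep value] [copy value]
    ][
        :value
    ]
]

>> copy-any [1 [2 3]]
== [1 [2 3]]
>> copy-any 5
== 5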
Geomol
4-Aug-2011
[2007x2]
I try to think of situations where I would need to do that. I also 
don't think I do it often. But it's an interesting idea. I'll try to 
come up with some situations where I need to copy a value that 
can sometimes be a series, sometimes not. Hmm...
Let's see: I need to copy a value, and it can be any value, so copy 
should work on any value. I don't see a flaw in that. I wonder if 
it has come up before?
Steeve
4-Aug-2011
[2009x2]
Well, I'm not against that, for sure. It's an old debate.
I dislike annoying type-checking error messages.

My point is that if a primitive function can't deal with a data type, 
the data should just pass through silently.

But other people will object that error messages are favored for 
educational purposes... Uh!?

But year after year, I see that more and more functions have been 
corrected to be tolerant.
A good example is REDUCE.
And yeah Geomol, COPY should act just like REDUCE:
if the input is not a series, it should just return it.
It would save a lot of useless checking code in our scripts.
And a lot of other functions should behave like that.
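
In R2, for example, REDUCE already passes a non-block value straight through, while COPY rejects it:

>> reduce 5
== 5
>> copy 5
** Script Error: copy expected value argument of type: series port bitset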
Geomol
4-Aug-2011
[2011]
There is a lot of type checking in REBOL. Too much sometimes, I feel. 
Calling many functions involves two types of type checking, as I see 
it. Take ADD. Values can be: number pair char money date time tuple
If I try to call ADD with some other type, I get an error:

>> add "a" 1

** Script Error: add expected value1 argument of type: number pair 
char money date time tuple

I can e.g. add pairs:

>> add 1x1 1x2    
== 2x3

and tuples:

>> add 1.1.1 1.2.3
== 2.3.4

But I can't add a pair and an issue:

>> add 1x1 1.2.3
** Script Error: Expected one of: pair! - not: tuple!


So that's kinda two different type checks. First, what the function 
takes as arguments, then what actually makes sense. If the user also 
needs to do type checking, three checks are then involved. It could 
be that the first kind isn't done explicitly for natives, and that it's 
kinda built in with the second kind. But for non-native functions, 
the first type check is done:

>> f: func [v [integer!]] [v]
>> f "a"
** Script Error: f expected v argument of type: integer
Geomol
8-Aug-2011
[2012]
Does anybody know the reason BACK was called that and not PREV?
Gregg
8-Aug-2011
[2013]
Perhaps because it's a complete word, rather than an abbreviation?
Geomol
8-Aug-2011
[2014x2]
Yeah, probably that. Not having English as my first language, 
I'm not sure, but isn't previous the opposite of next, and back the 
opposite of forward? Or can back be the opposite of next?
(Or maybe backward is the opposite of forward, to be really correct?)
Sunanda
8-Aug-2011
[2016]
English is probably too lenient to say for sure :)
If you do a web search for...
  "next back" button

...you'll see next and back are a common choice of names for navigating.
Gregg
8-Aug-2011
[2017]
'Previous is more formal, but it depends on usage, e.g. "Go to the previous 
slide" versus "Go back one slide".
Geomol
8-Aug-2011
[2018]
Thanks!
Geomol
11-Aug-2011
[2019x2]
I came across a funny thing with binding. We've learnt that the 
binding of words in a block is done when the words are put into 
the block. This little example with two functions illustrates that:
blk: []

f: func [
	v
][
	insert blk 'v
	g v
]

g: func [
	v
][
	if v = 3 [exit]
	print ["v:" v]
	probe reduce blk
	g v + 1
]


F puts the word V into the block, then calls G, which has its own 
V. When G reduces the block, we see the original value of V from F, 
even though G's V is changed:

>> f 1
v: 1
[1]
v: 2
[1]


Then I tried this next function, which puts V in the block at the 
start, then calls itself with a changed V value:

f: func [
	v
][
	if v = 3 [exit]
	if v = 1 [insert blk 'v]
	print ["v:" v]
	probe reduce blk
	f v + 1
]

>> clear blk
== []
>> f 1
v: 1
[1]
v: 2
[2]


This time, we see the latest version of V. The first V, which has 
the value 1, was put in the block, and it's still there somewhere 
in the system, but we get the V value from the latest F.

Is this a problem or a benefit, or is it just a bit strange?
Same behaviour in R2 and R3 btw.
Dockimbel
11-Aug-2011
[2021]
Nothing strange there. The indefinite extent of local contexts is sometimes 
handy when deferred evaluation is required, but dangerous if side effects 
are not controlled strictly. For example, it's handy for creating 
generators:

>> count: use [c][c: 0 does [c: c + 1]]
>> count
== 1
>> count
== 2
>> count
== 3
>> count
== 4
>> count
== 5
>> probe :count
func [][c: c + 1]
Cyphre
11-Aug-2011
[2022]
"The binding of words in a block is done when the words are put into 
the block" - I don't think this is true. See:
x: 10
blk: []
o: context [x: 20]
insert blk 'x
insert blk in o 'x
reduce blk
== [20 10]