world-name: r3wp

Group: !REBOL3-OLD1 ... [web-public]
BrianH:
23-Jun-2009
You can't bind to a map! because the keys are handled differently, 
with the keys going away when assigned none.
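For example, a minimal sketch of that behaviour (R3 map! semantics assumed):

    m: make map! [a 1 b 2]
    m/a: none        ; assigning none removes the key from the map
    select m 'a      ; == none -- the key is gone
    select m 'b      ; == 2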
Maxim:
23-Jun-2009
yep  :-)   I guess asking for a word in a map which isn't there returns 
none  ?
Maxim:
23-Jun-2009
a trick question... are set and get words enabled? and are they differentiated 
from lit-words?
Maxim:
23-Jun-2009
to me an unbound word is a lit-word in the context of the map...
Maxim:
23-Jun-2009
like find with blocks is very fast.  it only maps the exact same 
object, not a similar object AFAIK.
BrianH:
24-Jun-2009
The advantage of map keys is that they don't have to be the SAME? 
as the key, they can just be EQUAL? and the hashing speeds things 
up. With objects you either have a reference to it or you don't. 
If you have a reference, you can extend it with new fields, which 
can refer to the values you want to associate with the object. If 
you don't have a reference you need to specify a key within the object 
that you are searching for, and hashing won't help there, but we're 
going to add a mezzanine to do that kind of searching.
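A small sketch of the difference (key handling as described above, R3 semantics assumed):

    k1: "foo"
    k2: copy "foo"               ; equal? k1 k2, but not same? k1 k2
    m: make map! reduce [k1 1]
    select m k2                  ; == 1 -- the hashed lookup only needs EQUAL? keys
    same? k1 k2                  ; == false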
BrianH:
24-Jun-2009
I believe there is a blog conversation about that now :)
Maxim:
24-Jun-2009
yep... I just meant... it has to be very precisely/explicitly documented 
:-)


btw, I added other qualitative word examples for equality-likeness-sameness 
on the post... might give you ideas for a better scaling of the 
four levels
Ladislav:
24-Jun-2009
Sunanda: Correction! I told you that it was the largest INTEGER! 
prime in R3; I did not claim a larger one is not findable
Pekr:
24-Jun-2009
back from business trip. So what's the binary conversion status? 
Is it just a test how it turns out, or are we definitely going that 
direction?
Pekr:
25-Jun-2009
Well, but then there are those whose opinion is irrelevant, like mine 
:-) As for equality, I think you are heading in a good direction, 
as well as with copying. I do agree with Max that we need maximum flexibility 
to decide what gets copied and what not. Some things can then be 
pre-set, others set by users at will. We need good memory management 
...
Pekr:
25-Jun-2009
Well, if you agree, it is cool. Make a decision and move on - we 
still have tonnes of issues to solve :-) You know - half-functional 
modules, no concurrency, no upper-layer networking, still no plugins, 
no GUI (and Flash is going to be everywhere by the end of the year) 
... that seems like a lot of catching-up we need to do ...
BrianH:
25-Jun-2009
Max is missing that I already proposed a more flexible solution than 
he is proposing. He is so focused on the behavior of MAKE that he 
missed that I also made proposals for COPY and COPY/deep.
Izkata:
25-Jun-2009
I haven't been following too closely, but I'm against copy making 
a new object from an old one.  I'd have to look at the suggestions 
more to have more comments
BrianH:
25-Jun-2009
However, you can't have MAKE do a shallow copy: Spec blocks would 
explode in size. So you need another way to do a shallow copy.
BrianH:
25-Jun-2009
And if you have MAKE do a deep copy (as it does now in R3), memory 
explodes and you can't do graph references. So you need a way to 
make a deep copy too, in case you need to.
BrianH:
25-Jun-2009
Keep in mind that BIND/copy is more efficient in R3, and in theory 
doesn't even need to be a copy: It could be copy-on-write.
Izkata:
25-Jun-2009
(I went to sleep right after my last comment, hence the lag)


On equality:  Ladislav's idea sounds good, but I don't exactly see 
the point of having the bottom level in there.  What would it 
be used for?  How/where would the range for "approximate" be set?


On objects:  I think I may have misread it earlier - my comment was, 
"copy/deep [object1 object2]" should return a new block that still 
points at object1 and object2, rather than creating two duplicates 
of them.  But from the comments it certainly doesn't sound like I 
was thinking.  (Although make/deep or "copy/deep/types foo [object! 
series!]" suggestions sound like they may help..?)
BrianH:
25-Jun-2009
Adding a refinement to MAKE would be a bad idea, since it is a very 
low-level action that all datatypes support. COPY/types though... 
:)
BrianH:
25-Jun-2009
EQUAL? in R2 is level 1. Changing it to level 2 would break a lot 
of code, particularly for word equivalence :(
BrianH:
25-Jun-2009
You need a NOT-EQUAL? action so you can have a != operator. Every 
op! maps to an action!, no exceptions.
Izkata:
25-Jun-2009
Mmk, I must have just never run across a case where I would have 
seen that by coincidence.
PeterWood:
25-Jun-2009
How do you convert a Map! to a Block! in R3? The obvious way doesn't 
seem to work:

>> a: map! [a 1 b 1 c 1]
== [a 1 b 1 c 1]
>> b: to block! a
== [map!]
Sunanda:
25-Jun-2009
You are missing a TO (or  a MAKE)
    a: TO map! [a 1 b 1 c 1]
    to-block a
    == [a 1 b 1 c 1]
Maxim:
25-Jun-2009
version A62 just has to be a special version, it was released on 
my birthday  :-)
Ladislav:
30-Jun-2009
A question from Carl: 

Should #877 just cause an error (infinite cycle):

a: copy []
insert/only a a
copy/deep a
Anton:
30-Jun-2009
I think this is just about performance. SAME? has a simple job to 
do to see if two values are the same. EQUAL? investigates in more 
depth, and then discovers the range problem. This seems like a good 
implementation and I like the current behaviour.
Henrik:
30-Jun-2009
I'm considering the concept of neutral values: Empty blocks, empty 
strings, zero, true, and generally whatever is made by default with MAKE 
<type> [].

It comes from typing this a lot:

	all [
		val
		block? val
		empty? val
	]

which would be equivalent to:

	neutral? val

It may be a bad idea.
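A minimal sketch of what such a NEUTRAL? mezzanine might look like (the name and the exact set of neutral values are only assumed from the description above):

	neutral?: func [
		"TRUE if VALUE is the 'empty' default of its datatype (sketch)"
		value
	][
		case [
			any-block? :value  [empty? value]
			any-string? :value [empty? value]
			number? :value     [zero? value]
			logic? :value      [value = true]
			true               [false]
		]
	]

	neutral? []      ; == true
	neutral? "x"     ; == false
	neutral? 0       ; == true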
sqlab:
30-Jun-2009
Why should  equal? b b  yield an error?

Just because it gives back an error in R2? But there b alone 
gives back an error too.
The current behaviour in R3 is much better.
BrianH:
30-Jun-2009
I'm not convinced that the lack of an error with the straight b reference 
is good either. It seems like a good error to throw.
Ladislav:
30-Jun-2009
If we examine the "nature" of the implementation of REBOL blocks, 
then we come to the conclusion (at least I do), that the "abstraction" 
is as follows: (at least when we examine the PICK function): a block 
is an "essentially unlimited" series of values, the majority of them 
are  #[none] s, except for a limited "segment", which may "contain" 
other values as well
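For example, PICK past the populated segment of a block simply reads none:

    b: [a b c]
    pick b 2       ; == b
    pick b 10      ; == none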
BrianH:
30-Jun-2009
There is a lot of correct code that would assume that
    greater-or-equal? length? head b index? b

If this is ever not the case and I try to retrieve a value from that 
reference before that condition is true again, that is a serious 
error. If you fill in the missing data before you attempt to retrieve 
anything, it is not an error.
Ladislav:
30-Jun-2009
PICK happily allows you to examine any position, so code that works 
in a manner incompatible with PICK is violating the block design 
principle IMO
Maxim:
30-Jun-2009
henrik: NEUTRAL? is very useful; Python uses those as its false 
values.


when everything is coded using this it simplifies a lot of code, 
even more than using none.
Maxim:
30-Jun-2009
it's just different; it allows different optimisation of conditionals. 
NEUTRAL? can be very useful, especially for GUI handling code... 
where you usually don't care about the type, only whether a value is 
meaningful.
Henrik:
30-Jun-2009
I have not thought it through that much, other than figuring there 
would have to be a way to shorten that code to one step. I have compiled 
a list of neutral values for all types that are capable of producing 
neutral values. Some can't, and I wonder what the response to NEUTRAL? 
would be there.
Henrik:
30-Jun-2009
the lack of datatype screening might be a problem.
BrianH:
30-Jun-2009
It could take a datatype/typeset parameter (probably the first parameter 
of two).
BrianH:
30-Jun-2009
Internally you could even convert the datatype! to a typeset! and 
use FIND; since typesets are immediate there would be no overhead.
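A small illustration of that (R3's FIND accepting typeset! values assumed):

    ts: make typeset! [block! paren! path!]
    find ts block!       ; == true
    find ts integer!     ; == none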
BrianH:
1-Jul-2009
I don't understand: "I never use them, as I worry they are slow (mezzanine)."

REBOL doesn't really have multidimensional arrays. There is an ARRAY 
function that creates nested blocks (which is as close as C gets 
to multidimensional arrays too), but once created they aren't mezzanine, 
they're a data structure.
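For example, the ARRAY mezzanine just builds nested blocks:

>> array 2
== [none none]
>> array/initial [2 3] 0
== [[0 0 0] [0 0 0]]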
BrianH:
1-Jul-2009
Once the data structure is in memory, it doesn't matter whether it 
was created with a mezzanine or a native - all that matters is what 
kind of data structure it is. The nested block style is better for 
some things (data structure manipulation, flexibility) but worse 
at others (vector math). You make your tradeoffs. At least in R3 
you have a choice :)
Pekr:
1-Jul-2009
Well, summer is going to be a slow time anyway, for many of us ...
Ladislav:
1-Jul-2009
this is a test that succeeds in R2 as well as in R3; but, is it really 
supposed to?

		a-value: first ['a/b]
		parse :a-value [b-value:]
		not equal? :a-value :b-value
BrianH:
1-Jul-2009
I like the idea of non-datatype-specific EQUAL? considering datatypes 
in the any-string! family to be equal to each other, and also the 
any-block!, any-word! and number! families. I'm a little wary of 
the potential breakage, though have no idea of the scale of it. Is 
anyone aware of any code that would be broken if EQUAL?, =, NOT-EQUAL?, 
<> and != changed in this way?
Geomol:
1-Jul-2009
I find that the first version, with both endpoints (0.0 and the input 
value) as possible outputs at half the frequency of the other possible 
values in between, makes the most sense. I think of a number line going 
from 0.0 to the given value, and a random number is picked on the 
line.
Ladislav:
1-Jul-2009
just for the record: a variant yielding all values in the interval 
including the endpoints with equal frequency is possible too (just 
the generating formula is a bit different)
Geomol:
1-Jul-2009
RANDOM is a distribution. Getting random integers, the mean value 
is well defined as:

(max + 1) / 2

So e.g.

random 10

will give a mean of 5.5. What is the mean of

random 10.0
or
random 100.0
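One rough way to check empirically (just a sketch):

    sum: 0.0
    loop 100000 [sum: sum + random 10.0]
    sum / 100000     ; empirical mean -- about 5.0 if the endpoints are treated symmetrically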
PeterWood:
1-Jul-2009
Ladislav: I reported bug #3518 in Rambo mainly because the behaviour 
of the '= function is not consistent. My thinking was if 1 = 1.0 
why doesn't  #"a" = "a"?


It appears that the '= function acts as the '== function unless the 
types are number!. 


I have come to accept that Rebol has been designed pragmatically 
and, understandably, may be inconsistent at times. I think this makes 
the need for accurate documentation essential. I would hope that 
the function help for = can be changed to accurately reflect the 
function's behaviour.
PeterWood:
1-Jul-2009
I believe that there needs to be some restriction on the datatypes 
on which the '= function will work. It seems to make no sense to 
compare a URL! with an email! (Unless you define a URL! to be equal 
to an email! if they refer to the same IP address or domain name. 
Perhaps that's something for SAME?).


It's harder to say whether an issue! can be equal to a binary!, but 
what about an integer! with a binary!?
Maxim:
1-Jul-2009
wouldn't it be cool to load a rebol instance as a plugin within rebol? 
 :-)

this could be the basis for an orthogonal REBOL kernel  :-)
Anton:
1-Jul-2009
Peter, are you sure you would never want to compare an email with 
a url?
What about urls like this?
http://some.dom/submit?email=[somebody-:-somewhere-:-net]


I might want to see if a given email can be found in the query string.
Anton:
2-Jul-2009
Ladislav, your "parsing a lit-path" example above looks ok to me 
for the proposed ALIKE?/SIMILAR? operator, and EQUAL? if it's been 
decided that EQUAL? remains just as ALIKE?/SIMILAR?, but not ok if 
EQUAL? is refitted to name its purpose more accurately (ie. EQUAL? 
becomes more strict).
BrianH:
2-Jul-2009
Peter, in response to the suggestions in your last message:

- issue! = binary! : not good, at least in R3. Perhaps issue! = to-hex 
binary!

- integer! = binary! : not good, at least in R3. Use integer! = to-integer 
binary!


Actually, anything-but-binary! = binary! is a bad idea in R3, since 
encodings aren't assumed. The TO whatever conversion actions are 
good for establishing what you intend the binary! to mean though, 
especially since extra bytes are ignored - this allows binary streams.
Anton:
2-Jul-2009
BrianH, oh well, it's a pity.
Geomol:
2-Jul-2009
With the new random, 0.0 will also get a lot more hits than numbers 
close to 0.0. It's because the distance between different decimals 
is small for numbers close to zero, while the distance gets larger 
and larger with higher and higher numbers. (Because of the IEEE 754 implementation.) 
So the max value will get a lot more hits than a small number, and 
all hits on the max value get converted to 0.0.

I wouldn't use the new random function with decimals.
Geomol:
2-Jul-2009
If you do a lot of

random 2 ** 300

, the mean will be a lot below 2 ** 300 / 2.
Geomol:
2-Jul-2009
You're doing a good job, I just don't agree with Carl's view on this.
Ladislav:
2-Jul-2009
hmm, but I am still not sure, you would have to use a denormalized 
number as an argument to be able to detect the difference
Geomol:
2-Jul-2009
If you did e.g.

random 10.0


many many times, wouldn't you get a result, where 0.0 has a lot of 
hits, the first number higher than 0.0 will get close to zero hits, 
and then the number of hits will grow up to the number just below 
10.0, which will have almost as many hits as 0.0?
Ladislav:
2-Jul-2009
(if it does not work that way, then there is a bug)
Geomol:
2-Jul-2009
So when you do the calculation going from a random integer and divide 
to get a decimal, you get a result between zero and 10.0. If the 
result is close to zero, there are many many numbers to give the 
result, while if the result is close to 10.0, there are much fewer 
possible numbers to give the result.
Geomol:
2-Jul-2009
You take a lot of hits for the max value and convert them to 0.0.
Geomol:
2-Jul-2009
It could take a long long time to run an example showing this.
Ladislav:
2-Jul-2009
but, anyway, I am still pretty sure, that such a difference actually 
is undetectable
Geomol:
2-Jul-2009
A test could be to run
random 1.0

many many times and save the results coming close to zero. Like my 
other test, checking if the result is within the first 1024 numbers 
close to zero. After running the test many times, a picture can be 
seen.
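A minimal counting sketch of such a test (illustrative only):

    zeros: 0
    loop 1000000 [if zero? random 1.0 [zeros: zeros + 1]]
    zeros    ; counts how often the 0.0 bucket is hit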
Geomol:
2-Jul-2009
This means there are as many different possible decimals between 0.0 
and 1.5 as between 1.5 and the highest possible 64-bit decimal, when 
talking about floating points in a computer using the 64-bit IEEE 754 standard.
Ladislav:
2-Jul-2009
how about this, do you think, that there is a problem with it too?
two**62: to integer! 2 ** 62
two**9: to integer! 2 ** 9
two**53: to integer! 2 ** 53
r-uni: func [x [decimal!] /local d] [
	d: random two**62
	; transform to 0..2**53-1
	d: d // two**53
	; transform to 0.0 .. x (0.0 inclusive, x exclusive)
	d / two**53 * x
]
Ladislav:
2-Jul-2009
a modification:

two**62: to integer! 2 ** 62
two**53: to integer! 2 ** 53
two**53+1: two**53 + 1
r-uni: func [x [decimal!] /local d] [
	d: random two**62
	; transform to 0..2**53
	d: d // two**53+1
	; transform to 0.0 .. x (0.0 inclusive, x inclusive)
	d / two**53 * x
]
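Usage would be, for example:

    r-uni 10.0    ; a decimal in the closed interval [0.0 .. 10.0]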
Ladislav:
2-Jul-2009
anyway, this is always a problem, when we try to generate numbers 
in an "exotic" interval, not in the usual [0.0 .. 1.0]
Geomol:
2-Jul-2009
But!? :-)

>> to integer! 2 ** 62 - 1
== 4611686018427387904
>> to integer! 2 ** 62
== 4611686018427387904

Isn't that a bug?
Geomol:
3-Jul-2009
for random 1.0 you cannot find any irregularities, there aren't any


I think there are. Decimals with a certain exponent are equally spaced, 
but there are many different exponents involved going from 0.0 to 
1.0. The first normalized decimal is:

>> to-decimal #{0010 0000 0000 0000}
== 2.2250738585072e-308

The number with the next exponent is:

>> to-decimal #{0020 0000 0000 0000}
== 4.4501477170144e-308

I can take the difference:


>> (to-decimal #{0020 0000 0000 0000}) - to-decimal #{0010 0000 0000 0000}
== 2.2250738585072e-308

and see the difference double with every new exponent:


>> (to-decimal #{0030 0000 0000 0000}) - to-decimal #{0020 0000 0000 0000}
== 4.4501477170144e-308

>> (to-decimal #{0040 0000 0000 0000}) - to-decimal #{0030 0000 0000 0000}
== 8.90029543402881e-308

>> (to-decimal #{0050 0000 0000 0000}) - to-decimal #{0040 0000 0000 0000}
== 1.78005908680576e-307

So doing
random 1.0

many times with the current random function will favor 0.0 a great 
deal. The consequence is, 0.0 will come out many more times than 
the first possible numbers just above 0.0, and the mean value will 
be a lot lower than 0.5.
Geomol:
3-Jul-2009
The space between possible decimals around 1.0 is:


>> (to-decimal #{3ff0 0000 0000 0000}) - to-decimal #{3fef ffff ffff ffff}
== 1.11022302462516e-16

The space between possible decimals around 0.0 is:

>> to-decimal #{0000 0000 0000 0001}
== 4.94065645841247e-324


That's a huge difference. So it'll give a strange picture, if converting 
the max output of random (1.0 in this case) to 0.0.
Geomol:
3-Jul-2009
It should be clear, that it's a bad idea to move the outcome giving 
1.0 to 0.0, as is done now with the current RANDOM function in R3.
Geomol:
3-Jul-2009
Oh! Yes, I didn't have that in mind. So the smallest result larger 
than zero from RANDOM 1 is:

>> tt62: to integer! 2 ** 62
>> 1 / tt62
== 2.16840434497101e-19

It's still smaller than 1.11022302462516e-16

Can RANDOM 1.0 produce a result equal to 1.0 - 1.11022302462516e-16 
?
Ladislav:
3-Jul-2009
yes. If the difference is detectable by a test, then we should change 
the implementation
Paul:
3-Jul-2009
I found some interesting observations in working with random data 
recently. I was mostly working with the RandomMilliondigit.bin file 
data that is used to test compression algorithms. It exhibits a 
characteristic in that repetitious data ascends in almost a 1:2 
ratio.
BrianH:
4-Jul-2009
It's a recent bug fix. Wait, isn't STRICT-EQUAL? supposed to be EQUIVALENT? 
plus a datatype check? Then if EQUIVALENT? already does zone math 
I vote that STRICT-EQUAL? do zone math and SAME? do precise equivalence.
BrianH:
4-Jul-2009
Then this seems like a good add, unless it is reserved for =?
BrianH:
4-Jul-2009
Oh, the t/zone is really d/zone - it's a date! thing, recently fixed.
BrianH:
4-Jul-2009
Ladislav, check bug#1049 - I added a new comment. We need to discuss 
this.
BrianH:
4-Jul-2009
Mold does cycle detection, but not cycle comparison. Mold gives 
up when it detects a cycle.
BrianH:
4-Jul-2009
These would all need to be considered EQUAL?:
>> a: [z] append/only a a
== [z [...]]
>> b: [z [z]] append/only second b second b
== [z [...]]
>> c: [z [z]] append/only second c c
== [z [z [...]]]
BrianH:
4-Jul-2009
Cycle detection would be good enough to let you throw an informative 
error though, without getting as far as a stack overflow.
BrianH:
4-Jul-2009
Ladislav, the cyclic structures above are not the same structure, 
but the results would need to be considered EQUAL?. Unless we declare 
that cyclic structures need to have the same structure to be declared 
equal - then we can use a stack of ref/index for the structure detection.
Ladislav:
4-Jul-2009
try this in R3


>> do http://www.rebol.org/download-a-script.r?script-name=identity.r
Script: "Identity.r" Version: none Date: 7-May-2009/8:59:07+2:00
Script: "Contexts" Version: none Date: 12-May-2006/15:58+2:00
Script: "Closure" Version: none Date: 11-May-2009/19:13:51+2:00
Script: "Apply" Version: none Date: 7-May-2009/9:25:31+2:00
== make function! [[
    "finds out, if the VALUE is mutable"
    value [any-type!] /local r
][
    parse head insert/only copy [] get/any 'value [

        any-function! | error! | object! | port! | series! | bitset! |
        struct! | set r any-word! (
            either bind? :r [r: none] [r: [skip]]
        ) r
    ]
]]

>> a: [z] append/only a a
== [z [...]]

>> b: [z [z]] append/only second b second b
== [z [...]]

>> c: [z [z]] append/only second c c
== [z [z [...]]]

>> equal-state? a a
== true

>> equal-state? a b
== true

>> equal-state? a c
== true
BrianH:
4-Jul-2009
Mere cycle detection is only good enough to throw a good error. To 
determine equivalence you need to compare different cycles to see 
if  they are the same shape. Or topologically equivalent if you are 
ambitious.
Ladislav:
4-Jul-2009
do http://www.rebol.org/download-a-script.r?script-name=identity.r
BrianH:
4-Jul-2009
There's a lot of code in there, not all of it relevant.
Ladislav:
4-Jul-2009
yes, lots of those, no adjustment for R3, it is a surprise that it 
works at all
BrianH:
4-Jul-2009
Words in R3 preserve the case they are typed in, but are considered 
equivalent on a case-insensitive basis.
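For example (assuming the R3 behaviour described):

>> equal? 'Foo 'foo
== true
>> equal? "A" "a"
== true
>> strict-equal? "A" "a"
== false
>> mold 'Foo
== "Foo"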
BrianH:
4-Jul-2009
In any case, for R3 line 502:
            if strict-not-equal? mold :a mold :b [return false]
should probably be this:
            if not-equal? mold :a mold :b [return false]
BrianH:
4-Jul-2009
Unless you want
>> equal-state? [A] [a]
== false
Ladislav:
4-Jul-2009
but, if you want a model, I can write one
Ladislav:
4-Jul-2009
(or is it just one bit? - in that case it indeed is a problem)
BrianH:
4-Jul-2009
You would have to do something similar to a simplified EQUAL-STATE? 
even in a C version. There will never be enough bits for marking 
because every concurrent instance of your comparison would need its 
own set. Having thread-local done lists would work though.
Ladislav:
4-Jul-2009
concurrently running instance of EQUAL? - yes, understood, that would 
be impossible, but such a thing can be made "atomic/protected", can't 
it?
BrianH:
4-Jul-2009
OK, EQUAL-STATE? is too strict in another way: It also considers 
values that precede the series references. I'm going to figure out 
if it will still work if changed to use the level of equivalence 
expected from EQUAL?. Note:
>> equal? next [a a] next [b a]
== true
>> equal? next [a a] next 'b/a
== false
Not sure about that last one - will it be true in R3?
Ladislav:
4-Jul-2009
yes, re strictness, it will probably be found too strict in a number 
of ways
BrianH:
4-Jul-2009
What about equal? [a b] 'a/b ?
Ladislav:
4-Jul-2009
(it was certainly not a prototype of any specific member of the current 
equality comparison hierarchy)