r3wp [groups: 83 posts: 189283]
World: r3wp

[!REBOL3]

Steeve
25-Nov-2010
[6250]
Extend is a mezz, so it must be slower.

And I always try to avoid GC overhead due to excessive usage of 'reduce


That's why I think bind/new is more capable, especially in this form:
    set bind/new 'b obj 2
instead of:
    bind/new 'b obj  obj/b: 2
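A minimal sketch putting EXTEND and the SET BIND/new form side by side (R3 assumed; the word names and values are illustrative):
    obj: make object! [a: 1]
    extend obj 'b 2            ; mezzanine: reduces and appends a [b: 2] pair
    set bind/new 'c obj 3      ; bind/new adds the word, SET gives it a value
    ; obj now has a: 1, b: 2, and c: 3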
Andreas
25-Nov-2010
[6251x2]
Besides the readability/performance trade-off, things also start 
to differ when non-true? values are involved:
    obj: make object! []
    extend obj 'a none
    extend obj 'b false
    set bind/new 'c obj none
    set bind/new 'd obj false
    print mold obj
==>
    make object! [
        c: none
        d: false
    ]
(Which probably is a bug in EXTEND.)
Steeve
25-Nov-2010
[6253x2]
Nice catch
Hmm...
Seems it's not a bug, but deliberate.
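For reference, the R3 mezzanine is roughly this (paraphrased from memory, so details may differ); the IF :val guard is what drops none and false:
    extend: func [
        "Extend an object, map, block or paren type with word and value pair."
        obj [object! map! block! paren!] "object to extend (modified)"
        word [any-word!]
        val
    ][
        if :val [append obj reduce [to-set-word word :val]]
        :val
    ]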
Sunanda
25-Nov-2010
[6255]
EXTEND is a wrapper for APPEND, so are there any practical advantages 
to using BIND/NEW rather than APPEND?
    append obj [c: 3]
    == make object! [
        a: 1
        b: 2
        c: 3
    ]


I know right now there are some differences when messing with PROTECT/HIDE, 
but there are many CureCodes to go before we'll know if those differences 
are for real.
BrianH
25-Nov-2010
[6256x4]
I like the SET BIND/new trick. There has already been code in the 
mezzanines where such a trick would come in handy, but it had never 
occurred to me. Thanks Steeve!
The practical advantage of the SET BIND/new trick as opposed to 
APPEND is that you don't have to REDUCE a block to be appended. The 
point of the IF :val guard in EXTEND was to screen out false and 
none in GUI code, but that is likely no longer necessary. There was 
a blog where it was suggested that EXTEND be rewritten to work differently, 
but I think various natives were enhanced instead. EXTEND is 
now a little anachronistic.
I think that the code where we wanted to screen out none in the GUI 
code is now using the map! type instead, which does this itself.
Btw Steeve, mezzanines aren't necessarily slower than natives, particularly 
in R3. It all depends on how much work is being done by the function. 
Interpreter overhead is really low in R3. Mezzanines can use more 
memory, though we are enhancing the natives here and there to 
make it possible to lower the memory use of mezzanines - enhancements 
like the /into option.
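A small sketch of the /into idea (R3 assumed): instead of building a temporary block and appending it, results are written straight into an existing series:
    out: make block! 10
    append out reduce [1 + 2 3 + 4]      ; REDUCE allocates a temporary block first
    reduce/into [5 + 6 7 + 8] tail out   ; /into evaluates directly into OUT, no temporary
    probe out
    ; == [3 7 11 15]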
Andreas
25-Nov-2010
[6260x4]
Based on some simple experiments I did some time ago, mezzanines 
are roughly 2.5 times slower than the equivalent command (i.e. extension 
function).
That was for functions with minimal arithmetic, so that should basically 
be mostly the additional function call overhead.
Real natives should be a bit faster still.
Should probably dig out those experiments again to see exactly what 
I was measuring (or redo them), but that's the number that stuck 
with me :)
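A rough sketch of that kind of measurement (the loop count and the wrapper name are illustrative):
    my-add: func [a b] [a + b]       ; trivial mezzanine wrapper
    t0: now/precise
    loop 1000000 [add 1 2]           ; native call
    t1: now/precise
    loop 1000000 [my-add 1 2]        ; mezzanine call
    t2: now/precise
    print [difference t1 t0  difference t2 t1]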
Kaj
25-Nov-2010
[6264]
That would indeed be very low overhead
BrianH
26-Nov-2010
[6265]
And even less for regular mezzanines, which generally contain calls 
to more advanced native functions rather than just minimal arithmetic.
Sunanda
26-Nov-2010
[6266]
Thanks for the EXTEND vs APPEND vs BIND discussion, guys. It can 
be useful to have different mechanisms that have their own default 
behaviours.


One use of BIND/NEW is that it allows words to be unset when an object 
is created:


    obj: context [bind/new 'word self]   ;; this works, and WORD is UNSET!

    obj: context [word: #[unset!]]       ;; this does not work
    * Script error: word: needs a value
    * Where: make context
    * Near: make object! blk
BrianH
26-Nov-2010
[6267]
APPEND object! word! does the same thing.
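For example (R3 assumed), appending a plain word leaves it unset, as described above:
    obj: make object! []
    append obj 'word                 ; adds WORD to OBJ without giving it a value
    unset? get/any in obj 'word      ; == true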
Kaj
26-Nov-2010
[6268]
Well, I don't think REBOL suddenly has the speed of C
BrianH
26-Nov-2010
[6269]
Certainly not for C-like code, agreed. But if you are writing REBOL-like 
code then you are mostly calling fairly complex functions that are 
written in very good C, functions like PARSE and APPEND. Most of 
the code you run in REBOL is actually implemented in C, including 
the interpreter itself. So in that case REBOL has the speed of the 
C that it is written in.
Kaj
26-Nov-2010
[6270x2]
I just feel you're overdoing it a bit. If you were to follow that 
reasoning through, the conclusion would have to be that REBOL has 
negligible overhead over C. Indeed, that's what some people claim 
on the web because they heard it somewhere in the community, and 
that's how REBOL people become known to the outside world as lying 
or delusional
Among dynamic languages, REBOL is slow. Not the slowest, but slow. 
Not that this matters in practice, but that's a whole other set of 
myths
BrianH
26-Nov-2010
[6272]
Perl can also have the same speed as C under similar circumstances. 
It all depends on how heavy the native functions that you are calling 
are. But unlike most dynamic languages, REBOL is optimized for hand-optimization 
and doesn't have a semantic model that even vaguely resembles that 
of C code, so you can't just transliterate code from another language 
to REBOL and expect to get the same performance. REBOL is *really* 
slow for the type of code that requires a compiler to be efficient, 
but it can be *really* fast at the kind of code that other languages 
have a lot of trouble dealing with at all because they have to manually 
implement a lot of stuff that REBOL has built-in in native code.
Kaj
26-Nov-2010
[6273]
I understand your reasoning, but it all comes down to whether you 
want to be perceived by the outside world as delusional or not
BrianH
26-Nov-2010
[6274]
It is not a myth that REBOL is fast. It can sometimes be misunderstood 
though, because people who are familiar with other languages will 
try to write code in the style of those other languages and expect 
it to be fast. REBOL code written in REBOL style for tasks that REBOL 
is suited for can be very fast.
Kaj
26-Nov-2010
[6275]
"Very fast" doesn't mean anything by itself. For the outside world it 
does, because they're comparing it to something
BrianH
26-Nov-2010
[6276x2]
And for other tasks, we have extensions.
I think that the outside world would consider me delusional for choosing 
a language that isn't compiled, or (worst sin of all) doesn't have 
a C-like syntax. Most people think hand-optimization is insane in 
the modern world. And in some circumstances (and for many programmers) 
it is a bit crazy. But when you already can hand-optimize and understand 
the semantic model of the language you are using, and how it is different 
from other languages, then it's not so crazy. Differences matter. 
And I only use REBOL where it is appropriate, at least in comparison 
to the overhead of learning another tool.
Kaj
26-Nov-2010
[6278x2]
Rather than focusing on the details and claiming REBOL is fast in 
details, I would typify REBOL as a slow language that allows you 
to write fast algorithms
Algorithms dominate speed over details in almost all programs
BrianH
26-Nov-2010
[6280]
It goes the other way too: If you try to write REBOL-like code in 
most other languages then you will run into a wall. For most you 
will need to reimplement most of the natives before you can start, 
if it is possible to do REBOL-like code at all (often not with parsers). 
And when you do manage to get that code running it is often slower 
than REBOL code because of the optimization its natives have gone 
through.
Kaj
26-Nov-2010
[6281]
Yes, Greenspun's tenth rule
BrianH
26-Nov-2010
[6282]
There are some exceptions to this, languages that have comparable 
levels of built-in functionality, or more: Common Lisp, or Perl 6 
for instance.
Kaj
26-Nov-2010
[6283]
Well, then we're doomed against Perl 6 :-)
BrianH
26-Nov-2010
[6284]
PARSE rules are comparable to Perl 6 rules, both in speed and functionality. 
There are tricks you can do in either that you can't do in the other, 
or in some cases not easily. R2's PARSE outdid Perl 5's regexes, 
but R3 had to drastically update PARSE to catch up with Perl 6's 
rules.
Kaj
26-Nov-2010
[6285]
I never heard Carl mention that as the objective :-)
BrianH
26-Nov-2010
[6286x3]
I was the one who managed the parse proposals project - it was *my* 
objective.
There are tricks that you can do with dynamic rules that are provably 
impossible for static rules to do (patterns that static rules can't 
recognize). PARSE is a superset of the PEG model, while Perl 6 rules 
are a superset of recursive descent (LL), and there are patterns 
that LL can't handle that PEG can.
PARSE is dynamic PEG - there's nothing else like it (known to Wikipedia 
at least).
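A small example of what "dynamic" means here (R3 assumed; the grammar is illustrative): a sub-rule is built during the parse from data already matched, something a fixed, static rule set can't express directly - here a length-prefixed run of #"x":
    digit: charset "0123456789"
    len-rule: none
    rule: [
        copy n some digit
        (len-rule: compose [(to integer! n) #"x"])   ; build the repeat count at run time
        len-rule
        end
    ]
    parse "3xxx" rule    ; == true
    parse "5xxx" rule    ; == false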
Kaj
26-Nov-2010
[6289]
There's a lot in this world that is kept from Wikipedia...
BrianH
26-Nov-2010
[6290x2]
Nothing publicly released even in commercial form, then.
There are a lot of tricks that you can do in REBOL's DO dialect as 
well that can't be replicated in a compiled language without using 
self-modifying code. And vice-versa as well. Tradeoffs. But once 
you choose interpretation then you can optimize the language semantics 
to make that really efficient. That is why half of what a compiler 
does is done by LOAD, and optimized REBOL code looks a lot like what 
the other half of a compiler does. That is why REBOL DO is more comparable 
to one of Scheme's macro systems than it is to Scheme itself.
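A tiny illustration of that split: LOAD turns source text into a block of REBOL values (the "front half"), and DO evaluates it:
    code: load "print 1 + 2"    ; == [print 1 + 2]  -- just data at this point
    do code                     ; prints 3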
Kaj
26-Nov-2010
[6292]
Exactly: REBOL's tradeoff is that it's a slow language that allows 
you to write fast programs
BrianH
26-Nov-2010
[6293]
R3's DO dialect can be slower than compiled code for certain code 
patterns, but faster for others, depending on the compiler and language 
you are comparing it to (many dynamic languages were slower for a 
lot of code until recently). But you can make things a lot faster 
if you stop thinking of REBOL as being a language, and start thinking 
of it as being a library of native functions and datatypes, with 
a variety of high-level scripting languages built in to script that 
library, and some built-in functions written in those scripting languages. 
Plus you can add your own libraries and scripting languages if you 
like. Looking at things that way is the first step to becoming good 
at hand-optimizing REBOL.
Ladislav
26-Nov-2010
[6294x3]
"R3 had to drastically update PARSE to catch up with Perl 6's rules." 
- that is not an objective, that is a statement, as I see it
I have such (or worse) problems with almost everything stated above
and, btw, no update to PARSE was "drastic" in fact
BrianH
26-Nov-2010
[6297x3]
That's true, "drastic" is a bit of an overstatement. Almost(?) everything 
you can do in R3's PARSE you could do in R2 with dynamic parse tricks, 
DO dialect tricks or a preprocessor. But for the average PARSE user 
who can't understand the advanced workarounds the new capabilities 
are just that: new. If you look at idiomatic R3 rules compared to 
their R2 equivalents then the changes at least *look* drastic. Certainly 
different enough that for most people PARSE is quite a bit more powerful.
And it was an objective, from way back when we did the first round 
of PARSE proposals. At the time I had been following the development 
of Perl 6, especially their rules enhancements. Some of the PARSE 
proposals were based on trying to catch up with or show up Perl 6, 
and others came from tricks that other parser generators like ANTLR 
could do (like IF). To be fair, it was an unstated objective, at 
least on the pages. Peta had different objectives of course, like 
better matching the PEG theoretical model, and those were also good.
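For instance, the IF keyword mentioned above lets an ordinary expression decide whether a rule succeeds (R3 assumed; the example is illustrative):
    digit: charset "0123456789"
    small-number: [copy n some digit if (100 > to integer! n)]
    parse "42" small-number     ; == true
    parse "420" small-number    ; == false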
I'm sure that you had other objectives as well when you got involved 
in the most recent parse project 6 or 7 months after it started. 
Were you involved in the first round of parse proposals about 6 years 
ago? I remember Gabriele making a page for them after they had been 
discussed for a while, but not whether the initial discussions were 
here or on the mailing list.