r3wp [groups: 83 posts: 189283]

World: r3wp

[Rebol School] Rebol School

denismx
19-Apr-2006
[256x2]
In fact, in any programming language, code is just data that is executable. 
Some languages allow that code-data to be processed like any other 
data; Rebol is not the only one. And I do not believe that this is 
its main characteristic. The fundamental characteristic of Rebol 
is that it is a language for exchanging data over networks, be it 
information (data) or programs (code), so that it can be used and 
executed (if code is passed) on any computer connected to the network.
For this to be possible, the language naturally needs to be able to 
interpret new code passed to it.
JaimeVargas
19-Apr-2006
[258]
Yes denismx, but that is not the only approach possible. Erlang passes 
byte code, and it is very good at distributed computing; same with 
Termite.
denismx
19-Apr-2006
[259x2]
What is amazing is that the interpreter is so small and yet permits 
so much.
Of course...
JaimeVargas
19-Apr-2006
[261]
It has to do with the careful mapping of datatypes to literals.
denismx
19-Apr-2006
[262]
The small footprint, you mean?
JaimeVargas
19-Apr-2006
[263]
That is a forte of Rebol: it avoids the data pickling or serialization 
which is required in other languages.
denismx
19-Apr-2006
[264]
ic
JaimeVargas
19-Apr-2006
[265x2]
Regarding the small footprint, I think this is just proper coding, and 
avoidance of bloat.
The mzscheme interpreter is only 300K on my system. So it is not unheard 
of.
denismx
19-Apr-2006
[267x3]
I'm sure there is a lot of that. But then again, 256K for the core 
seems very small.
ic
and Basic was pretty small too. Guess I'm getting too used to bloated 
stuff with the years :-)
JaimeVargas
19-Apr-2006
[270x4]
Yes.
Most interpreter machines are small. What makes them big is all the 
libraries and IDEs that are added to them.
Going home now. Keep enjoying Rebol.
Before I go, this is the shortest intro to Scheme and functional programming 
that I have found. It will get you up to speed on this model in one 
day: http://www.ccs.neu.edu/home/dorai/t-y-scheme/t-y-scheme.html
denismx
19-Apr-2006
[274]
I will look into it, Jaime. Tks. Although I am doubtful that my solution 
to devising a "better" way to teach Rebol lies in getting a better 
mastery of functional programming, I may be wrong. So I'll follow 
your lead.
JaimeVargas
20-Apr-2006
[275]
Functional programming demystifies a lot imho.
denismx
21-Apr-2006
[276x3]
I'm sure it does, but my impression is that I don't have any problem 
with that concept. I programmed in Logo and Prolog (for teaching 
purposes, not commercially). The idea that I can build Rebol statements 
in blocks and evaluate them, all at runtime, does not faze me. But 
I'm always willing to learn more of anything. It never hurts (much).
The question I am asking myself now, in my exploration of Rebol, 
is: What is the smallest subset of predefined Rebol words that will 
empower a student to build significant small applications.
If this set is small enough (400 words is way too large), say 15 to 
30 words, then this would be a good starting point for teaching purposes 
(18-20 year olds with no previous experience in programming).
Anton
21-Apr-2006
[279]
That leads me to wonder if I could produce a histogram of all the 
rebol words in my codebase. But "rebol words" is kind of hard to 
define, so it would not give a precise result. I think individual 
frequency analysis of some actual rebol apps would lead to a nice 
collection of functions.
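The extraction itself need not be done by hand: since a Rebol script LOADs into nested blocks, the words can be collected by walking those blocks. This is only a rough, untested REBOL 2 sketch; collect-words is a made-up name, not a built-in, and set-words, paths, and dialect keywords are deliberately ignored.

```
; Recursively collect every word! value from a loaded script block.
; collect-words is a made-up helper name (REBOL 2 assumed, untested).
collect-words: func [blk [block! paren!] /local result] [
    result: copy []
    foreach item blk [
        case [
            word? item [append result item]
            any [block? item paren? item] [
                append result collect-words item
            ]
        ]
    ]
    result
]

; Build a [count word] histogram, most frequent first.
words: collect-words load %demo-virtual-face.r
hist: copy []
foreach w unique words [
    cnt: 0
    foreach x words [if x = w [cnt: cnt + 1]]
    repend hist [cnt w]
]
sort/reverse/skip hist 2
new-line/all/skip hist on 2
print mold hist
```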
[unknown: 9]
21-Apr-2006
[280]
Then a genus tree, since many words are just subtle variations.
Maxim
21-Apr-2006
[281]
Ballpark figure: I'd say basic I/O and core series handling.
Allen
22-Apr-2006
[282]
Rather than choosing a subset of words to learn first, choose the 
task instead, the required subset will then be fairly obvious.
Anton
22-Apr-2006
[283x11]
That's definitely true, but I see value in trying to determine which 
functions are used the most often, to teach those first.
Well, I've just manually extracted the rebol functions from my latest 
script demo-virtual-face.r (as posted in the View group), so I'm 
looking at those. I've excluded layout and draw dialect keywords. 
The order in which the functions appear is interesting. I have some 
duplicates. So now I'm analysing..
Also it's clear to me that the importance of a function is not always 
strongly related to its frequency of use. Take VIEW, for example: 
not used that much compared to other functions, but without it you 
cannot open a window! (You can, actually, in other ways, but VIEW 
does a lot of work. Mmm... another way to assess the importance of a 
function: the length of its source?)
Ok, so here's my frequency table:
    6 compose 
    5 as-pair 
    5 func 
    4 do 
    3 show 
    2 all 
    2 copy 
    2 find 
    2 form 
    2 get 
    2 in 
    2 pick 
    2 print 
    2 to-image 
    2 use 
    1 * 
    1 + 
    1 - 
    1 <> 
    1 = 
    1 append 
    1 bind 
    1 center-face 
    1 change 
    1 clear 
    1 context 
    1 do-events 
    1 either 
    1 first 
    1 foreach 
    1 if 
    1 join 
    1 layout 
    1 load-thru 
    1 make 
    1 mold 
    1 object? 
    1 reduce 
    1 remold 
    1 remove-each 
    1 repeat 
    1 second 
    1 select 
    1 to-pair 
    1 to-path 
    1 view
To create the above list, I just read my source script file and wrote 
each word as I came across it, manually, into a new script file. 
Then I ran the following code:
blk: [
* 
+ 
- 
<> 
= 
all 
all 
append 
as-pair 
as-pair 
....
]
blk: sort blk
ublk: unique blk
hist: copy []
foreach word ublk [
	count: 0 forall blk [if blk/1 = word [count: count + 1]]
	repend hist [count word]
]
new-line/all/skip hist on 2
sort/reverse/skip hist 2
write clipboard:// mold hist
Gosh, there are so many ways to analyse! There's also the issue 
of how often some functions are called, not just how often they are 
written. For example: I have not included the names of my own functions, 
whose source is written once but used many times, effectively hiding 
the importance of the words used inside them from this analysis technique.
I've added another much longer file, so my frequency table now looks 
like this:
    47 if 
    35 all 
    17 func 
    14 find 
    13 in 
    13 not 
    13 print 
    12 do 
    12 either 
    12 get 
    10 = 
    10 next 
    9 clear 
    9 exit 
    9 insert 
    9 pick 
    8 compose 
    7 any 
    6 foreach 
    6 mold 
    6 tail? 
    5 - 
    5 as-pair 
    5 last 
    5 none? 
    5 object? 
    5 paren? 
    4 head 
    4 reduce 
    4 show 
    4 while 
    3 break 
    3 copy 
    3 remold 
    3 remove 
    3 same? 
    3 tail 
    3 use 
    2 * 
    2 + 
    2 <> 
    2 context 
    2 forall 
    2 form 
    2 make 
    2 prin 
    2 return 
    2 set 
    2 to-image 
    2 to-time 
    2 view
Such analysis as this also ignores the interesting ways that words 
are related in patterns of usage. (eg. [get in] is used quite often)
Anyway, I hope the above list can help to get a rough idea of which 
functions should be studied first.
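Patterns like [get in] could be counted as well, by tallying adjacent word pairs (bigrams) in the same flat word list. A sketch, REBOL 2 assumed and untested; count-bigrams is a made-up name:

```
; Count adjacent word pairs (bigrams) in a flat block of words.
; count-bigrams is a made-up name (REBOL 2 assumed, untested).
count-bigrams: func [words [block!] /local pairs pair pos] [
    pairs: copy []
    forall words [
        if 2 <= length? words [
            pair: copy/part words 2
            either pos: find/only pairs pair [
                pos/2: pos/2 + 1   ; bump the count stored after the pair
            ][
                append/only pairs pair
                append pairs 1
            ]
        ]
    ]
    pairs
]

; e.g. count-bigrams [get in face get in face]
; should yield [[get in] 2 [in face] 2 [face get] 1]
```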
Volker
22-Apr-2006
[294]
- Maybe examine multiple scripts, and count in how many of them a word 
appears? Then 'view would count high, even if used only once per script. 
Words occurring in every script are important.

- "Choose the task instead" - good idea. Make a list of tasks and 
list the required words. It could be in that 15-30 range.
Maxim
22-Apr-2006
[295x2]
The better way to gauge usage is not by frequency within one script 
but by usage amongst many scripts, where multiple uses within a single 
script still only count as one. I'd use the rebol.org site 
to scan scripts from any given group and gather usage from them. Thus 
networking would score view as almost 0, whereas GUI would place view 
as the most used word (in every script).
And using rebol.org you could even classify by level, and separate out 
words being used more often in 'advanced or intermediate work.
Anton
22-Apr-2006
[297]
That would improve things I think.
[unknown: 9]
22-Apr-2006
[298]
Also it's clear to me that the importance of a function is not always 
strongly related to its frequency of use.

This is why you need the Genus.


View would be a root, while many string commands would all be gathered 
at the end of some branch. This becomes really easy to see when 
you present the data that way.
Anton
22-Apr-2006
[299]
It's not so simple, because there are many ways to hierarchically 
arrange the words, looking at them from different aspects.
Maxim
22-Apr-2006
[300x3]
then it's a good test for the associative DB I am about to start working 
on.  :-)
I was looking for a good and simple data set to organise.
(I have been REBOLing full time for a while and yes, many things 
are moving ahead :-)
Anton
22-Apr-2006
[303]
You could make trees where the grouping is by the words that follow 
or precede a word. Expressions, basically :-)
Maxim
22-Apr-2006
[304x2]
We can have each atom of information relate to any other atom based 
on rules we define... once the dataset is parsed, you end up 
with a complex model of all the related data, which you can query and 
search through quite quickly.
I am trying not to open up too much about many of my current projects... 
but this is going to be co-developed while I'm doing liquid.