[REBOL] Re: Big parse rules

From: brett:codeconscious at: 20-Jul-2003 18:40

Hi Martin,

I'm not entirely sure how you want to partition your function's logic. I do have a couple of suggestions, though I have no idea whether they will be useful for you. It does seem, though, that you need a way to "contextualise" your sectional rules.

1) Volker mentioned objects. One program I made recently had all the rules together in a file, call it e.g. %lexical.r. Some of these rules had variable names to set and names of functions to call from inside parens. I.e. the variables are state variables and the functions are equivalent to scanner events. I then "late bound" the lexical rules to the scanner like this:

    scanner: copy [
        tkType: tkStart: tkEnd: tkBeginLine: tkEndLine: tkEmit: none
        tkResult: copy []
        tkEmit: does [do-body tkType tkStart subtract index? tkEnd index? tkStart]
    ]
    scanner: make context scanner load %lexical.r

where tkType, tkStart, etc. are words embedded in the rules stored in %lexical.r. The scanner was created inside a function to keep things tidy. In this way I could possibly use different generated scanners to achieve different purposes while reusing the same lexical rules.

2) Functions. I'm interested in the way code/data can be exploited by converting it into a function. E.g. this block stores a tree:

    sample-tree: [node 1 [node 2 [] node 3 []] node 4 []]

It can be transformed into a function body, and each tree node evaluated, by doing something like:

    eval-tree: func [node] sample-tree
    eval-tree func [label subtrees] [probe label reduce subtrees]

Hope one of these ideas helps!

Regards,
Brett.
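
To make idea 1 above more concrete, here is a minimal sketch of what a rules file like %lexical.r could contain. The grammar and the names (digit, letter, number-rule, word-rule, token-rule) are invented for illustration; the only point is that each rule marks tkStart and tkEnd and fires tkEmit from a paren, and that those words are bound to the scanner object when it is made:

    ; hypothetical contents of %lexical.r -- rule and charset names are invented
    digit:  charset "0123456789"
    letter: charset [#"a" - #"z" #"A" - #"Z"]

    number-rule: [
        tkStart: some digit tkEnd:       ; mark start and end positions
        (tkType: 'number tkEmit)         ; set the token type and fire the scanner event
    ]
    word-rule: [
        tkStart: some letter tkEnd:
        (tkType: 'word tkEmit)
    ]
    token-rule: [some [number-rule | word-rule | skip]]

With a scanner object built as above, something like parse text scanner/token-rule would then drive the events, assuming do-body (which tkEmit refers to) is defined in the surrounding code.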
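
Similarly, a small variation on idea 2 shows the traversal order; the word labels is introduced just for this example:

    ; collect the node labels of sample-tree in depth-first order
    labels: copy []
    eval-tree func [label subtrees] [
        append labels label     ; visit this node
        reduce subtrees         ; then evaluate (recurse into) its children
    ]
    probe labels                ; should print [1 2 3 4]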