AltME groups: search
Help · search scripts · search articles · search mailing list · results summary
world | hits |
r4wp | 8 |
r3wp | 152 |
total: | 160 |
results window for this page: [start: 1 end: 100]
world-name: r4wp
Group: Ann-Reply ... Reply to Announce group [web-public] | ||
BrianH: 26-Sep-2012 | Here is the FSF FAQ entry relating to interpreters and their libraries: http://www.gnu.org/licenses/gpl-faq.html#IfInterpreterIsGPL Pretty much the whole entry is applicable. The first paragraph would apply to data passed to DO, PARSE, DELECT, DO-COMMANDS, or other dialect processors. The second paragraph would definitely apply to extensions, and could apply to built-in functions unless we get an exception like GCC's; or we could get a FAQ entry declaring that the functions built into R3 are "part of the interpreter" rather than "library code", despite R3's actual system model. Note that PARSE's built-in operations are more unambiguously "part of the interpreter", and the same could be said for other similar dialects. The last two paragraphs apply to mezzanine code and embedded modules. If they are GPL'd and your code uses them, it would be affected. | |
Group: Rebol School ... REBOL School [web-public] | ||
Gregg: 24-Apr-2012 | parse-int-values: func [
    "Parses and returns integer values, each <n> chars long in a string."
    input [any-string!]
    spec [block!] "Dialected block of commands: <n>, skip <n>, done, char, or string"
    /local
        gen'd-rules ; generated rules
        result      ; what we return to the caller
        emit emit-data-rule emit-skip-rule emit-literal-rule emit-data
        digit= n= literal= int-rule= skip-rule= literal-rule= done= build-rule=
        data-rule skip-rule
][
    ; This is where we put the rules we build; our generated parse rules.
    gen'd-rules: copy []
    ; This is where we put the integer results
    result: copy []

    ; helper functions
    emit: func [rule n] [append gen'd-rules replace copy rule 'n n]
    emit-data-rule: func [n] [emit data-rule n]
    emit-skip-rule: func [n] [emit skip-rule n]
    emit-literal-rule: func [value] [append gen'd-rules value]
    emit-data: does [append result to integer! =chars]

    ; Rule templates; used to generate rules
    ;data-rule: [copy =chars n digit= (append result to integer! =chars)]
    data-rule: [copy =chars n digit= (emit-data)]
    skip-rule: [n skip]

    ; helper parse rules
    digit=: charset [#"0" - #"9"]
    n=: [set n integer!]
    literal=: [set lit-val [char! | any-string!]]

    ; Rule generation helper parse rules
    int-rule=: [n= (emit-data-rule n)]
    skip-rule=: ['skip n= (emit-skip-rule n)]
    literal-rule=: [literal= (emit-literal-rule lit-val)]
    done=: ['done (append gen'd-rules [to end])]

    ; This generates the parse rules used against the input
    build-rule=: [some [skip-rule= | int-rule= | literal-rule=] opt done=]

    ; We parse the spec they give us, and use that to generate the
    ; parse rules used against the actual input. If the spec parse
    ; fails, we return none (maybe we should throw an error though);
    ; if the data parse fails, we return false; otherwise they get
    ; back a block of integers. Have to decide what to do if they
    ; give us negative numbers as well.
    either parse spec build-rule= [
        either parse input gen'd-rules [result] [false]
    ] [none]
] | |
BrianH: 8-Aug-2012 | rle2: funct ["Run length encode" b [series!]] [
    output: copy []
    x: none
    r: either any-block? :b [
        qr: copy [quote 1]
        [(qr/2: :x) any qr]
    ] [[any x]]
    parse/case :b [
        any [
            pos1: set x skip r pos2: (
                reduce/into [subtract index? :pos2 index? :pos1 :x] tail output
            )
        ]
    ]
    output
]
>> rle2 [a a A b b c d D d d d]
== [2 a 1 A 2 b 1 c 1 d 1 D 3 d] | |
DocKimbel: 8-Aug-2012 | Here's an R2 solution with the same rules for string! and block! series:
rle: func [s [series!] /local out c i][
    out: make block! 1
    parse/case/all s [
        any [
            [end | c: (
                c: either word? c/1 [to-lit-word c/1][c/1]
                i: 1
            )]
            skip
            some [c (i: i + 1) | (repend out [i c]) break]
        ]
    ]
    out
]
>> rle "aaabbcx"
== [3 #"a" 2 #"b" 1 #"c" 1 #"x"]
>> rle [a a a a a]
== [5 a]
>> rle [a a a a a b b]
== [5 a 2 b]
>> rle [a a A b b c d D d d d]
== [3 a 2 b 1 c 5 d] | |
BrianH: 14-Aug-2012 | I'd love to see the Topaz PARSE enhancements in Red too :) | |
Gregg: 28-May-2013 | parse-int-values: func [
    "Parses and returns integer values, each <n> chars long in a string."
    input [any-string!]
    spec [block!] "Dialected block of commands: <n>, skip <n>, done, char, or string"
    /local
        gen'd-rules ; generated rules
        result      ; what we return to the caller
        emit emit-data-rule emit-skip-rule emit-literal-rule emit-data
        digit= n= literal= int-rule= skip-rule= literal-rule= done= build-rule=
        data-rule skip-rule
][
    ; This is where we put the rules we build; our generated parse rules.
    gen'd-rules: copy []
    ; This is where we put the integer results
    result: copy []

    ; helper functions
    emit: func [rule n] [append gen'd-rules replace copy rule 'n n]
    emit-data-rule: func [n] [emit data-rule n]
    emit-skip-rule: func [n] [emit skip-rule n]
    emit-literal-rule: func [value] [append gen'd-rules value]
    emit-data: does [append result to integer! =chars]

    ; Rule templates; used to generate rules
    ;data-rule: [copy =chars n digit= (append result to integer! =chars)]
    data-rule: [copy =chars n digit= (emit-data)]
    skip-rule: [n skip]

    ; helper parse rules
    digit=: charset [#"0" - #"9"]
    n=: [set n integer!]
    literal=: [set lit-val [char! | any-string!]]

    ; Rule generation helper parse rules
    int-rule=: [n= (emit-data-rule n)]
    skip-rule=: ['skip n= (emit-skip-rule n)]
    literal-rule=: [literal= (emit-literal-rule lit-val)]
    done=: ['done (append gen'd-rules [to end])]

    ; This generates the parse rules used against the input
    build-rule=: [some [skip-rule= | int-rule= | literal-rule=] opt done=]

    ; We parse the spec they give us, and use that to generate the
    ; parse rules used against the actual input. If the spec parse
    ; fails, we return none (maybe we should throw an error though);
    ; if the data parse fails, we return false; otherwise they get
    ; back a block of integers. Have to decide what to do if they
    ; give us negative numbers as well.
    either parse spec build-rule= [
        either parse input gen'd-rules [result] [false]
    ] [none]
] | |
Group: !REBOL3 ... General discussion about REBOL 3 [web-public] | ||
Andreas: 25-Feb-2013 | I discovered some interesting PARSE functionality, which I have not known about before. TO and THRU with integer arguments seem to do absolute positioning: >> parse "abcd" ["abc" to 2 "bcd"] == true Anyone seen this before? I added a CC ticket as a reminder to document it (http://issue.cc/r3/1964) -- if anyone knows about a place where this is documented already, I'd be happy about a pointer. | |
Maxim: 2-Apr-2013 | If it were a generic string-handling function I'd agree with you... but it's not... it has added meaning: it splits filesystem paths. It's not just a string. If it were, I'd use parse or some tokenize func. I see absolutely no merit in trying to make split-path act like a generic string-handling func. The point of the func is to separate folder and file into two parts. To me it comes down to this: either you decide that when there is no data you invent a default, or you use the internal one, which is none, which works well with soooo many other funcs. If there is no directory part in the path, do not try to find a suitable value for it... there is none... Funny, even when trying to explain my point of view, the actual sentence reads almost like a line of rebol source. :-) |
world-name: r3wp
Group: All ... except covered in other channels [web-public] | ||
Maxim: 5-May-2006 | Can I vote in r3 to add to-any... which stops at the first matching rule in the order of the block being parsed, as opposed to the order in the parse rules? This would make many rules simpler, or make parse easier to use in Q&D stuff. | |
Gabriele: 5-Sep-2006 | i'd actually implement switch using parse. i find it very useful to be able to specify multiple values for the same block. | |
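Gabriele's idea can be sketched roughly as below. Everything here (the MY-SWITCH name, the spec layout, the supported value types) is a hypothetical, untested illustration of the approach, not an actual implementation:

```rebol
; Sketch of a PARSE-based switch where several values share one branch block.
my-switch: func [value spec [block!] /local vals body result] [
    parse spec [
        some [
            ; collect the run of candidate values before each branch block
            copy vals some [word! | integer! | string!]
            set body block!
            (if all [none? result  find vals value] [result: body])
        ]
    ]
    if result [do result]
]
; usage sketch: 'a and 'b share the first branch
my-switch 'b [a b [print "a or b"] c [print "c"]]
```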
btiffin: 9-Jan-2009 | Umm, yes and no on the fear. Yes, fear kept me from holding a lecture on the subject, but I usually PARSE from reading the pretty print code. That's how my locate utility works. But I'm not concerned with us. I'm concerned with construction bosses and non-tech professors having access to a programming language and learning maybe one or two tricks a week. I'm also on the side of the gurus in terms of correctness and concise coding, but I'd like to see REBOL, the system, that out of the box would be a robust battle tank. Add taint to the fuel, and it would still function; perhaps not gracefully, but the big guns would still fire. Today, the slightest spoonful of sugar and our tank dies on the field, no movement, no guns. foreign! and Steeve's suggestion of scan till whitespace (and yes, some source code would load as almost all completely foreign! gibberish if a quote was out of place, but still, we can take that and fix it). But at least REBOL wouldn't die; the data/code would be loaded and inspectable. And yes, this could lead to the odd rare catastrophic failure, but we get that potential with "clean" datatype! scripts too. I think the slight increased risk is worth the new group of users this could attract. | |
Chris: 1-Apr-2009 | I'd say there is a case for adapting Rebol's vocabulary, eg: measure! - proposed a long time ago - 2cm 3.4cl 5o (degrees) 1em - found elsewhere, eg CSS date! - recognize some common alternate constructs - 12-Mar-2009T04:00 money! - the suggested: $1,000.00 I'd love to see Rebol mature along these lines. The literal types are the essence of Rebol's being; they make for expressive problem solving and efficient data exchange with some resemblance to terms we would use on paper - all with 'load as its core arbiter. It'd be great to be able to extract meaning from any stream of data, and I think if any language can, it's Rebol - however, it just seems beyond the scope of 'load, which has this specifically and valuably defined purpose. Whereas 'parse can be used to describe anything! - even if you load junk!, you're still going to need 'parse to make sense of it... | |
Group: RAMBO ... The REBOL bug and enhancement database [web-public] | ||
sqlab: 1-Dec-2005 | parse "12345" [copy a to 2 copy b to 3 copy c to 4 copy d to 5 copy e to 6]
== true
>> a
== "1"
>> b
== "2"
>> c
== "3"
>> d
== "4"
>> e
== "5" | |
Group: Core ... Discuss core issues [web-public] | ||
Sunanda: 30-Dec-2004 | You'd probably need parse if the item was more complex. I think parse would be overkill here. | |
eFishAnt: 13-Jan-2005 | if that is the struct of the data stored in a file, it should not be too hard to parse the file to get the information...looks like a 3-D rendering file or ? | |
Graham: 22-Mar-2005 | Is it possible to set the modification date on a directory? I keep getting errors whereas it works with files in win32.
>> set-modes %xml-parse.r [ modification-date: 1-Jan-2005 ]
>> set-modes %www/ [ modification-date: 1-Jan-2005 ]
** Access Error: Cannot open /D/rebol/rebXR/www/
** Near: set-modes %www/ [modification-date: 1-Jan-2005]
>> set-modes %www [ modification-date: 1-Jan-2005 ]
** Access Error: Cannot open /D/rebol/rebXR/www
** Near: set-modes %www [modification-date: 1-Jan-2005]
>> | |
Brock: 4-May-2005 | mention in... parse test ...you should replace test with your d word. | |
Brock: 4-May-2005 | show: func [ d /local alpha ][
    get-city: [thru "City:" copy city to "^/"]
    get-stateprov: [thru "Stateprov:" copy stateprov to "^/"]
    get-country: [thru "country:" copy country to "^/" to end]
    parse d [get-city get-stateprov get-country]
    print [ "City: " a ]
    print [ "StateProv: " b ]
    print [ "Country: " c ]
] | |
Brock: 4-May-2005 | If you didn't know the order of the data being provided to you then you could generalize the code even further... here are the two lines that would change....
get-country: [thru "country:" copy country to "^/"] ; remove "to end"
parse d [any [get-city get-stateprov get-country] to end] ; added 'any block and "to end" | |
Brock: 4-May-2005 | ; here's a working show... but I didn't easily come across a solution to allow for an unknown order of items to find
show: func [ d /local alpha ][
    get-city: [thru "City:" copy city to "^/"]
    get-stateprov: [thru "Stateprov:" copy stateprov to "^/"]
    get-country: [thru "country:" copy country to "^/"]
    parse d [get-city get-stateprov get-country to end]
    print [
        "City:" tab trim city newline
        "Stateprov:" tab trim stateprov newline
        "Country:" tab trim country newline
    ]
] | |
Group: Script Library ... REBOL.org: Script library and Mailing list archive [web-public] | ||
Sunanda: 1-May-2007 | That's a nice idea for a sort of "REBOL explainer" application. But it would be difficult to do in the Library. The Library does attempt to load and parse scripts -- that's how we do the colorisation. But (as with Gabriele's code) we rely on REBOL's own reflective abilities to tell us what is a word, function, operator etc. The Library runs an old version of Core (and even if we updated that, we'd never run a version of View on a webserver), so it does not have access to all the information a proper explainer/highlighter would need. Take this script for example: http://www.rebol.org/cgi-bin/cgiwrap/rebol/view-script.r?color=yes&script=reboldiff.r 'new-line is a valid REBOL word, but it is not colored: that's because it is not a word in the version we use. So sadly, the colorisation at REBOL.org remains a nice bit of eye candy rather than a solidly dependable feature. | |
Group: I'm new ... Ask any question, and a helpful person will try to answer. [web-public] | ||
RobertS: 14-Sep-2007 | I realized there was this traversal option using a lit-path! treated as a series!, but it did not seem to work if what I already had was a path! held by a word and I wanted to 'extend' that value with a word. This arises when the embedded word becomes bound to a different block. In that case an OBJECT! looks to be the only option, but then the WORDs in the PATH come already bound to values and so are not 'functors' as are 'a 'd and 'e in your example. I want to construct a resultant valid path! from a valid path! + a lit-word where that word has no value but serves only as functor. I had hoped that the func to-lit-path would be the answer, but I see now that the default Rebol DO path! evaluation precludes this kind of 'append'. I should be able to use a modified version of your eval-path func to take as args a valid path! and a word! My path idea is more like a 'tilde' than our '/' such that I can have ; blk/key~wrd1~wrd2~wrd3 ... ~wrd-n ; e.g., path~wrd1~wrd-i~wrd-j ~wrd-k ; becomes ; ... path2~wrd-m~wrd-n ; i.e., ; blk/key/putative-confirmed-key~wrd-m~wrd-n PARSE is likely part of the answer if I go that TILDE route. Once I have a lit-path! your eval-path is the traversal. A blk of args to a func such as construct_dpath: func [ dpath [lit-path!] functor-words-blk [block! ] /local v1 v2] [ should model my case OK, and that dpath can be constructed by modified versions of your eval-path. Thanks | |
Janko: 8-Jan-2009 | >> parse "A.B!C.D." [ any [ [thru "." | thru "!" ] mark: (print mark ) ] ]
B!C.D.
D.
== true
>> parse "A.B!C.D." [ any [ [thru "!" | thru "." ] mark: (print mark ) ] ]
C.D.
D.
--- in first case it skips the C, in second it skips the B .. | |
Oldes: 10-Jan-2009 | str: "a.b.c.d!e?f. "
chars: complement charset ".!?"
>> parse str [any chars tmp: to end (uppercase tmp)]
>> str
== "a.B.C.D!E?F. " | |
Oldes: 10-Jan-2009 | >> parse str: "assd.asd!d" [any chars tmp: (uppercase tmp)] str == "assd.ASD!D" | |
Henrik: 17-Apr-2009 | the difference between using a set-word and SET word!:
parse [a b c d] [
    w1: word! (probe w1)
    w2: word! (probe w1 probe w2)
    set w3 word! (probe w1 probe w2 probe w3)
    w4: word! (probe w1 probe w2 probe w3 probe w4/1)
] | |
Maxim: 14-May-2009 | so you'd just create a block before the parse, and dump the data which you want in there, using your new structure. | |
sqlab: 23-Jun-2009 | Maybe these are some variations of what you are looking for
parse/all "fd doixx s x x x oie x } " [some [copy d "x" (print d) | skip]]
parse/all "fd doixx s x x x oie x } " [some [copy d 1 2 "x" (print d) | skip]]
parse/all "fd doixx s x x x oie x } " [some [copy d 2 "x" (print d) | skip]]
parse/all "fd doixx s x x x oie x } " [some [copy d "xx" (print d) | skip]]
parse/all "fd doixx s x x x oie x } " [some [[copy d "x" copy e "x" (print [e d])] | skip]]
parse/all "fd doixx s x x x oie x } " [some [(g: copy "") 2 [copy d "x" (append g d)] (print g) | skip]] | |
sqlab: 23-Jun-2009 | or you are looking for the pairs
parse/all "fd doixx s x x x oie x } " [
    some [[
        (g: copy "")
        2 [copy d "x" (append g d) any notx | skip]
        (if not empty? g [print g])
    ]]
] | |
sqlab: 23-Jun-2009 | I forgot notx
notx: complement charset "x"
parse/all "fd doixx s x x x oie x } " [
    some [
        (g: copy "")
        2 [copy d "x" (append g d) any notx | skip]
        (if not empty? g [print g])
    ]
] | |
mhinson: 23-Jun-2009 | This is what I don't expect.
parse/all "fd doixx s x x x oie x } " [some [copy d "x" (print d) | skip]] | |
BrianH: 23-Jun-2009 | >> parse/all { X X XX X X} [(prin 'a) some [(prin 'b) "X" (prin 'c) [(prin 'd) "X" (print 'e) | (prin 'f) skip (prin 'g)] (prin 'h) | (prin 'i) skip (prin 'j)] (prin 'k)] abijbcdfghbcdfghbijbcde hbijbcdfghbcdfijbik== true | |
sqlab: 24-Jun-2009 | regarding parse/all "fd doixx s x x x oie x } " [some [copy d "x" (print d) | skip]] what did you expect? If you know what you are looking for you can extend it to parse/all "fd doixx s x x x oie x } " [some [copy d ["x" | "y" | "z" ] (print d) | skip]] and you will get your searched values. But maybe I just don't understand the problem. | |
mhinson: 24-Jun-2009 | Right, I would say that the following snippet is the most educational thing I have done with PARSE. It shows me a lot of things about what is happening & validates the construction and use of charsets & whatever the 'address block is called. Thanks everyone for your help.
digit: charset [#"0" - #"9"]
address: [1 3 digit "." 1 3 digit "." 1 3 digit "." 1 3 digit]
a: does [prin 'a]
b: does [prin 'b]
c: does [prin 'c]
d: does [prin 'd]
e: does [prin 'e]
f: does [prin 'f]
parse/all {1 23 4.5.6.12 222.1.1.1 7 8} [
    some [
        (a) copy x address (prin x)
        some [(b) copy y address break | skip (c)] (print y)
        | skip (d)
    ]
]
adadadadada4.5.6.12bcb222.1.1.1 | |
Endo: 1-Dec-2011 | I'm also working on something very similar to your case right now. I don't know if it's useful for you, but here is how I do it (on Windows):
command: {csvde -u -f export.ldap -d "ou=myou" -r "(objectClass=user)" -s 10.1.31.2 -a "" "" -l "DN,sn,uid,l,givenName,telephoneNumber,mail"}
call/wait/console/shell/error command %export.err ;export all users, bind anonymous
if 0 < get in info? %export.err 'size [print "error" editor %export.err halt]
lines: read/lines %export.ldap
;create an object from the first line (field names, order may differ from what you give in the batch)
ldap-object: construct append map-each v parse first content none [to-set-word v] 'none
foreach line lines [
    (
        set words-of o: make ldap-object [] parse/all line {,}
        append users: [] o
    )
]
;append all valid users as an object to a block
probe users
I hope it gives some idea. | |
Group: Parse ... Discussion of PARSE dialect [web-public] | ||
Romano: 30-Jan-2005 | 1.2.57 >> parse/all {"a""b""c"de} "e" == ["a" "b" "c" "d"] Please, add the bug to RAMBO. | |
Brett: 13-Mar-2005 | Graham, I'd probably use parse/all rather than parse. Also don't forget the parse-header function and all the associated bug fixing work related to it in view 1.3 project. May or may not be of use to you. | |
BrianH: 22-Aug-2005 | parse/all data [any [to "*" a: skip b: to "*" c: skip d: :a (change/part a rejoin ["<strong>" copy/part b c "</strong>"] d)] to end] | |
BrianH: 22-Aug-2005 | markup-chars: charset "*~"
non-markup: complement markup-chars
tag1: ["*" "<strong>" "~" "<i>"]
tag2: ["*" "</strong>" "~" "</i>"]
parse/all data [
    any non-markup
    any [
        ["*" a: skip b: to "*" c: skip d: |
         "~" a: skip b: to "~" c: skip d:]
        :a (
            change/part a rejoin [
                select tag1 copy/part a b
                copy/part b c
                select tag2 copy/part c d
            ] d
        )
        any non-markup
    ]
    to end
] | |
BrianH: 22-Aug-2005 | Here's a simplified version of my example that can handle multiple instances of multiple markup types and be adapted to different end tags (thanks Tomc for the idea!):
markup-chars: charset "*~"
non-markup: complement markup-chars
tag1: ["*" "<strong>" "~" "<i>"]
tag2: ["*" "</strong>" "~" "</i>"]
parse/all data [
    any non-markup
    any [
        ; This next block can be generated if you have many markup types...
        [a: copy b "*" copy c to "*" copy d "*" e: |
         a: copy b "~" copy c to "~" copy d "~" e:]
        :a (change/part a rejoin [tag1/:b c tag2/:d] e)
        any non-markup
    ]
    to end
] | |
BrianW: 22-Aug-2005 | Here's what I have right now:
markup-chars: charset "*_@"
non-markup: complement markup-chars
inline-tags: [
    "*" "strong"
    "_" "em"
    "@" "code"
]
markup-rule: [
    any non-markup
    any [
        [a: "*" b: to "*" c: skip d: |
         a: "_" b: to "_" c: skip d: |
         a: "@" b: to "@" c: skip d:]
        :a (
            change/part a rejoin [
                "<" select inline-tags copy/part a b ">"
                copy/part b c
                "</" select inline-tags copy/part a b ">"
            ] d
        )
        any non-markup
    ]
    to end
]
parse text markup-rule | |
BrianW: 22-Aug-2005 | okay, here's a slightly tweaked version that uses a multichar markup tag:
markup-chars: charset "[*_-:---]"
non-markup: complement markup-chars
inline-tags: [
    "*" "strong"
    "_" "em"
    "@" "code"
    "--" "small"
]
markup-rule: [
    any non-markup
    any [
        [a: "*" b: to "*" c: skip d: |
         a: "_" b: to "_" c: skip d: |
         a: "@" b: to "@" c: skip d: |
         a: "--" b: to "--" c: skip skip d:]
        :a (
            change/part a rejoin [
                "<" select inline-tags copy/part a b ">"
                copy/part b c
                "</" select inline-tags copy/part a b ">"
            ] d
        )
        any non-markup
        | skip
    ]
    to end
]
parse/all text markup-rule | |
MichaelB: 23-Oct-2005 | I just found out that I can't do the following: s: "a b c" s: "a c b" parse s ["a" to ["b" | "c"] to end] The two strings should only symbolize that b and c can alternate. But 'to and 'thru don't work with subrules. It's not even stated in the documentation that it should but wouldn't it be natural ? Or am I missing some complication for the parser if it would support this (in the general case indefinite look-ahead necessary for the parser - is this the problem?) ? How are other people doing things like this - what if you want to parse something like "a bla bla bla c" or "a bla bla bla d" if you are interested in the "bla bla bla" which might be arbitrary text and thus can't be put into rules ? | |
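Since TO/THRU can't take sub-rules here, one common workaround (a sketch, and it assumes the alternatives start with characters that never occur in the filler text) is to skip ahead with a complemented charset and then match the alternatives directly:

```rebol
; skip anything that can't start either alternative, then match them directly
non-bc: complement charset "bc"
parse/all "a b c" ["a" any non-bc ["b" | "c"] to end]  ; finds "b" first
parse/all "a c b" ["a" any non-bc ["b" | "c"] to end]  ; finds "c" first
```

This breaks down as soon as a first character of an alternative can legally appear in the "bla bla bla" filler, which is exactly the limitation MichaelB is pointing at.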
Graham: 4-Nov-2005 | I used to parse HL7 messages differently ... splitting them into fields as well. But this time I thought I 'd try a rule based approach. | |
Sunanda: 12-Jan-2006 | It'd be fun to compare parse and REs..... Maybe a shootout between experts in both. Both sides could learn a lot. | |
Oldes: 14-Mar-2006 | but the truth is that in CSV it is logical to have:
parse {,d ,d} {,}
== ["" "d" "d"] | |
Oldes: 14-Mar-2006 | and
parse {,"a b, d" ,d} {,}
== ["" "a b, d" "d"]
(so probably Carl is right ;-) | |
Sunanda: 28-Apr-2006 | (I was sure I'd posted this just after Oldes' message..... But it ain't there now..... Maybe it's in the wrong group.) Andrew has a nice starter set: http://www.rebol.org/cgi-bin/cgiwrap/rebol/view-script.r?script=common-parse-values.r And I know he has extended that list extensively to include things like email addresses and URLs | |
Gordon: 29-Jun-2006 | I'm a bit stuck because this parse stops after the first iteration. Can anyone give me a hint as to why it stops after one line? Here is some code:
data: read to-file Readfile
print length? data
224921
d: parse/all data [
    thru QuoteStr copy Note to QuoteStr thru QuoteStr
    thru quotestr copy Category to QuoteStr thru QuoteStr
    thru quotestr copy Flag to QuoteStr thru newline
    (print index? data)
]
1
== false
Data contains hundreds of "memos" in a csv file with three fields: Memo, Category and Flag ("0"|"1"); all fields are enclosed in quotes and separated by commas. It would be real simple if the Memo field didn't contain double quoted words; then parse data none would even work; but alas many memos contain other "words". It would even be simple if the memos didn't contain commas, then parse data "," or parse/all data "," would work; but alas many memos contain commas in the body. | |
Izkata: 29-Jun-2006 | if QuoteStr = "\"", then this looks like it to me:
Note , "Category", "Flag"
Note , "Category", "Flag"
But you don't have a loop or anything - try this:
d: parse/all data [
    some [
        thru QuoteStr copy Note to QuoteStr thru QuoteStr
        thru quotestr copy Category to QuoteStr thru QuoteStr
        thru quotestr copy Flag to QuoteStr thru newline
        (print index? data)
    ]
] | |
Izkata: 29-Jun-2006 | This change in the parse looks like it works:
>> data: {"Note", "Category", "Flag"
{    "Note", "Category", "Flag"
{    "Note", "Category", "Flag"
{    "Note", "Category", "Flag"
{    }
== {"Note", "Category", "Flag"
Note , "Category", "Flag"
Note , "Category", "Flag"
Note , "Category", "Flag"
}
>> QuoteStr: to-char 34
== #"^""
>> d: parse/all data [
[    some [
[        X: thru QuoteStr copy Note to QuoteStr thru QuoteStr thru quotestr
[        copy Category to QuoteStr thru QuoteStr thru quotestr copy Flag to QuoteStr
[        thru newline (print index? :X)
[    ]
[ ]
1
29
57
85
== true | |
Oldes: 3-Oct-2006 | maybe this will help: x: [1 2 3 4 5] parse x [any [x: set d number! (probe x probe d x: next x) :x]] | |
Maxim: 13-Apr-2007 | that is what I meant... I'd like parse to do it for us . | |
Gabriele: 8-Jun-2007 | and actually... i'd call parse directly in that case ;) | |
btiffin: 24-Jan-2008 | I'm pondering attempting a PARSE lecture here on Altme; It'd be run twice, 9am EST, 9pm EST (or somesuch). Topic would be dialecting. I want to see if it would work, but I'm nowhere near a professor-level rebol. So, think of it as a kindergarten lecture, as a trial. Plan; Post this message - see if there is feedback. Allow for some Q&A time for specific topics of interest. A week or two later, run an hour (probably less) of monologue (interruptions allowed for stuff that is just plain wrong ... but other than that participants would be asked to hold off on questions). Followed immediately with a Q&A, complaint, correction session. Then a DocBase page created with a merged transcript of the two timezoned lectures, things learned, and hopefully something along the lines of a simple file management (or some such) dialect source code file. R2 related - for me the R3 DELECT still hasn't sunk in. If it works, then perhaps it could become a semi-regular activity... there is going to be a lot to discuss come "link to the rebol.dll" time. | |
PatrickP61: 23-Feb-2008 | I have a question on the above parse by Oldes on Feb 8th. If you feed in a [a b c d e f] you will get a-b-c-==false How can you change the parse so that it will put a dash in between all characters, without defining each character? | |
BrianH: 23-Feb-2008 | Patrick, in answer to your first question: parse [a b c d e f] [ set x word! (prin form x) any [set x word! (prin join "-" form x)] ] | |
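The same first-then-rest idiom can be wrapped up as a small delimited-join helper. JOIN-WORDS is a hypothetical name of my own and this is an untested sketch along the lines of Brian's rule, not a stock function:

```rebol
; join the words of a block with an arbitrary delimiter string
join-words: func [blk [block!] delim [string!] /local out x] [
    out: copy ""
    parse blk [
        set x word! (append out form x)                 ; first item, no delimiter
        any [set x word! (repend out [delim form x])]   ; rest, delimiter-prefixed
    ]
    out
]
join-words [a b c d e f] "-"  ; intended to yield "a-b-c-d-e-f"
```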
btiffin: 21-Aug-2008 | A long time ago, I offered to try a lecture. Don't feel worthy. So I thought I'd throw out a few (mis)understandings and have them corrected, to build up a level of comfort that I wouldn't be leading a group of high-potential rebols down a garden path. So; one of the critical mistakes in PARSE can be remembered as "so many", or a butchery of some [ any [ , so many. SOME asks for a truth among alternatives, and ANY says "yep, got zero of the thing I was looking for", but doesn't consume anything. SOME says, great, and then asks for a truth. ANY says "yep, got zero of the thing I was looking for", and still doesn't move, ready to answer yes to every question SOME can ask. An infinite PARSE loop. Aside: to protect against infinite loops, always start a fresh PARSE block with [() - the "immediate block" of the paren! will allow for a keyboard escape, and not the more drastic Ctrl-C. So, I'd like to ask the audience: what other PARSE command sequences can cause infinite loops? end? And is it only "end", "to end", but "thru end" will alleviate that one? end end end end being true?
>> parse "" [some [() end end end]]
(escape)
>> parse "" [some [() thru end end end]]
== false
>> parse "" [some [() to end end end]]
(escape)
>>
Ok, but thru end is false. Is there an idiom to avoid looping on end, but still being true on the first hit? Other trip-ups? | |
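The SOME [ANY ...] trap btiffin describes can be shown in two lines; the second rule is one conventional guard (a sketch, not the only idiom): make every SOME iteration consume at least one input item.

```rebol
; parse "abc" [some [any "x"]]       ; never returns: ANY "x" succeeds at
;                                    ; every position without consuming input
parse "abc" [some [some "x" | skip]] ; safe: each SOME iteration consumes
                                     ; either a run of "x" or one character
```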
Anton: 10-Oct-2008 | term: [word! | into term] parse [a b [c]] [some term] ;== true parse [a b [c d]] [some term] ;== false | |
Anton: 10-Oct-2008 | terms: [some [word! | into terms]] parse [a b [c d]] terms ;== true | |
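Anton's recursive rule extends naturally. For instance (the COUNT side effect is an addition of mine, not part of the original), counting words at any nesting depth:

```rebol
count: 0
terms: [some [word! (count: count + 1) | into terms]]
parse [a b [c [d e]] f] terms ; == true
; count is now 6 (a b c d e f)
```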
Anton: 5-Nov-2008 | Peter's example, from the blog: parse [a b c d] [ any [ start (acc: 0) | set inc integer! (acc: acc + inc) | end ] ] | |
Sunanda: 6-Nov-2008 | My suggested improvement to parse would be a trace (or debug) refinement: trace-output-word: copy [] parse/trace string rules trace-output-word I'm not entirely sure how it would work. That would depend in part on how parse works internally, and so what trace points are possible. But, as a minimum, I'd expect it to show me each rule that triggers a match, and the current position of the string being parsed. parse would append trace info to the trace-output word Otherwise, parse is too big a black box for any one other than very patient experts. | |
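Until something like a parse/trace exists, a common hand-rolled substitute (just an idiom sketch, not a real refinement) is to drop parens into the rules that log the current position as matching proceeds:

```rebol
digit: charset "0123456789"
parse/all "ab12" [
    some [
        mark: (print ["at:" mold mark]) ; show the remaining input each pass
        [digit | skip]
    ]
]
```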
BrianH: 6-Nov-2008 | Here's an example of what you could do with the PARSE proposals:
use [r d f] [ ; External words from standard USE statement
    parse f: read d: %./ r: [
        use [d1 f p] [ ; These words override the outer words
            any [ ; Check for directory filename
                (d1: d) ; This maintains a recursive directory stack
                p: ; Save the position
                change [ ; This rule must be matched before the change happens
                    ; Set f to the filename if it is a directory else fail
                    set f into file! [to end reverse "/" to end]
                    ; f is a directory filename, so process it
                    (
                        d: join d f ; Add the directory name to the current path
                        f: read d   ; Read the directory into a block
                    ) ; f is now a block of filenames.
                ] f ; The file is now the block read above
                :p ; Go back to the saved position
                into block! r ; Now recurse into the new block
                (d: d1) ; Pop the directory stack
                ; Otherwise backtrack and skip
                | skip
            ] ; end any
        ] ; end use
    ] ; end parse
    f ; This is the expanded directory block
] | |
BrianH: 6-Nov-2008 | Here's a revised version with more of the PARSE proposals:
use [r d res] [ ; External words from standard USE statement
    parse res: read d: %./ r: [
        use [ds f] [ ; These words override the outer words
            any [ ; Check for directory filename
                (ds: d) ; This maintains a recursive directory stack
                [ ; Save the position through alternation
                    change [ ; This rule must be matched before the change happens
                        ; Set f to the filename if it is a directory else fail
                        set f into file! [to end reverse "/" to end]
                        ; f is a directory filename, so process it
                        (
                            d: join d f ; Add the directory name to the current path
                            f: read d   ; Read the directory into a block
                        ) ; f is now a block of filenames.
                    ] f ; The file is now the block read above
                    fail ; Backtrack to the saved position
                    | into block! r ; Now recurse into the new block
                ]
                (d: ds) ; Pop the directory stack
                ; Otherwise backtrack and skip
                | skip
            ] ; end any
        ] ; end use
    ] ; end parse
    res ; This is the expanded directory block
] | |
BrianH: 8-Nov-2008 | I am the editor of the PARSE proposals. It was decided that I perform this role because Carl is focused on the GUI work right now and someone qualified had to do it. With Carl busy and Ladislav not here, I am the one left who has the most background in parsing and the most understanding of what can be done efficiently and what can't. When the PARSE REPs of old were discussed, I was right there in the conversation and the originator of about half of them, mostly based on my experience with other parsers and parser generators. Because of this I am well aware of the original motivation behind them, and have had many years to think them through. It's just a head start, really.

I am also the author of the current implementation of COLLECT and KEEP, based on Gabriele's original idea, which was a really great idea. It is also really limited. Collecting information and building data structures out of it is the basic function that programming languages do, and something that REBOL is really good at. I am not in any way denigrating the importance of building data structures. I certainly did not mean to imply that your appreciation of that important task was in any way less important.

The role of an editor is not just to collect proposals, but to make sure they fit with the overall goal of the project. This sometimes means rejecting proposals, or reshaping them. This is not a role that I am sorry about - someone has to do it to make our tool better. We are not Perl, this is not anything goes, we actually try to make the best decisions here. I hate to seem the bad guy sometimes, but someone has to do it :(

PARSE is a portion of REBOL that is dedicated to a particular role. It recognizes patterns in data, extracts some of the data, and then calls out to the DO dialect to do something with the data. It doesn't really do anything to the data itself - everything happens in the DO dialect code in the parens.
It is fairly simple really, and from carefully designed simplicity it gets a heck of a lot of power and speed. That is its strength. The thing that a lot of people don't remember when making improvements to a dialect like PARSE is that PARSE is only one part of REBOL. If something doesn't go into PARSE, it can go into another part of REBOL. We have to consider the language as a whole when we are doing things like this. Here is the overall rationale for the PARSE dialect proposals:
- All new features need to be simple to explain and use, and fast at runtime.
- A good feature would be one of these:
  - An extremely powerful enhancement of PARSE's language recognition.
  - A fix to a design flaw in an existing feature, or a compatibility fix.
  - A serious improvement to a sufficiently common use case, or common error.
The reason I didn't want to put COLLECT and KEEP into PARSE is because it is a small part of a much bigger problem that really needs a lot of flexibility. Different structure collection and building situations require different behavior. It just so happens that the DO dialect is much better suited to solving this particular problem than the PARSE dialect is. Remember, PARSE is a native dialect, and as such is rather fixed. There are some PARSE proposals that make parse actually do something with the data itself: CHANGE, INSERT and REMOVE. We were very careful when we designed those proposals. In particular, we wanted to provide the bare minimum that would be necessary to handle some very common idioms that are usually done wrong, even by the best PARSE programmers. Sometimes we add stuff into REBOL that is just there to solve a commonly messed up problem, so that a well debugged solution would be there for people to choose instead of trying to solve it again themselves, badly. (This is why the MOVE function got added to R3 and 2.7.6, btw.) 
Even with that justification those features might not make it into PARSE because they change the role of PARSE from recognition to modification. I have high hopes, though. Another proposal that might not make it into PARSE is RETURN. RETURN is another ease-of-use addition. In particular, the thing it makes easy is stopping the parse in the middle to return some recognized information. However, it changes the return characteristics of PARSE in ways that may have unpredictable results, and may not have enough benefit. The proposal that has a better chance of making it is BREAK/return, though I'd like to see both (we can hope, right?). Most of the REPs from Gabriele's doc have been covered. Most of them have been changed because we have had time in the last several years to give them some thought; the only unchanged ones are NOT and FAIL, so far. Some have been rejected because they just weren't going to work at all (8 and 12). THROW and DO are still under discussion - the proposals won't work as is, but the ideas behind them have merit. The rest have been debated and changed into good proposals. Note that the DO proposal would be rejected outright for R2, but R3's changes to word binding make it possible to make it safe (as figured out during a conversation with Anton this evening). There are other features that are not really changes to the PARSE dialect, and so are out of scope for these proposals. That doesn't mean that they won't be implemented, just that they are a separate subject. That includes delimiter parsing (sorry, Petr), tracing (sorry, Henrik), REBOL language syntax (sorry, Graham), and port parsing (sorry, Steeve, Anton, Doc, Tomc, et al). If it makes you feel better, while discussing the subject with Anton here I figured out a way to do port parsing with the R3 port model (it wouldn't work with the R2 port model). I will bring these all up with Carl when it comes to that. I hope that this makes the situation and my position on the subject clearer. 
I'm sorry for any misunderstandings that arose during this process. | |
BrianH: 8-Nov-2008 | I have run out of ideas, and am asking for more. Through discussions with Carl I have a pretty good idea about what would be rejected, and what has already been rejected. If you want to make more suggestions, please review the proposals that have been made already in the Parse Proposals wiki and Gabriele's REPs. If your suggestion is covered by something suggested in one of those places you can be sure that they have already been debated to death. If not, I'd love to hear it :) | |
Anton: 8-Nov-2008 | Just some ideas for possible usage. [[item1 item2 | item2 item1 SWAP] ] ; Put previous two matched items in order. ==> [item1 item2] ; Always sorted. [ROT [a b c d e]] ; Rotate items matched by next subrule, if it matches. ==> [b c d e a] ; [start: a [b c] DUP start] ; Duplicate items from start to current parse index. ==> [a b c a b c] [a DROP [b c]] ; If next subrule matches, then remove items matched, and set parse index back to the beginning of the remove. ==> [a] (DROP is just like REMOVE, so not really needed, I think. Just doing the exercise to see.) The above can be categorized by how they fetch their arguments: - Take two previously matched items/subrules (like SWAP). - Match the next subrule (like ROT, DROP). - Use a variable to take the parse index (like DUP). | |
BrianH: 17-Nov-2008 | Your example with alternates (and bug fixes, still ignoring leap years): m31: ["Jan" | "Mar" | "May" | "Jul" | "Aug" | "Oct" | "Dec"] ; joins were in wrong direction m30: join m31 [| "Apr" | "Jun" | "Sep" | "Nov"] m28: join m30 [| "Feb"] b28: next repeat x 28 [repend [] ['| form x]] ; next to skip leading |, numbers don't work in string parsing b30: ["29" | "30"] ; optimization based on above reversed joins b31: ["31"] parse date-str [ b28 "-" m28 | b30 "-" m30 | b31 "-" m31 ] The above with CHECK instead: m31: ["Jan" "Mar" "May" "Jul" "Aug" "Oct" "Dec"] m30: join m31 ["Apr" "Jun" "Sep" "Nov"] m28: join m30 ["Feb"] b28: repeat x 28 [append [] form x] ; not assuming b30: ["29" "30"] ; optimization based on above reversed joins b31: ["31"] parse date-str [ copy d some digit "-" copy m some alpha check ( any [ all [find b31 d find m31 m] all [find b30 d find m30 m] all [find b28 d find m28 m] ]) ] Which would be faster would depend on the data and scenario. | |
BrianH: 17-Nov-2008 | Here's a simpler date checker with CHECK: parse date-str [copy d [1 2 digit "-" 3 alpha "-" 4 digit] check (attempt [to-date d])] | |
Chris: 18-Nov-2008 | 'append would do it... numbers don't work in string parsing - I thought about this when I developed the example, thought it might be possible as the numbers appear outside the dialect. But 'check seems like the better option. joins were in the wrong direction - d'oh! simpler date checker - that's only useful if to-date recognizes the date format : ) (and using dates was illustrative - there are other situations with similar needs). Though on dates, what would be the most succinct way with the proposals on the table to do the following? ameridate: "2/15/2008" parse ameridate ...rule... newdate = 15-Feb-2008 One attempt: parse ameridate [ use [d m][ change [copy m 1 2 digit "/" copy d 1 2 digit] (rejoin [d "/" m]) ] "/" 4 digit end check (newdate: to-date ameridate) ] | |
Janko: 31-Jan-2009 | the last problem I had and Steeve and Oldes proposed solutions... I got Steeve's one but I don't get what "complement charset" in Oldes's does.. >> str: "a.b.c.d!e?f. " chars: complement charset ".!?" >> parse str [any chars tmp: to end (uppercase tmp)] str == "a.B.C.D!E?F. "<< | |
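For reference, COMPLEMENT CHARSET in the snippet above builds the inverse bitset: CHARS matches every character except "." "!" "?", so ANY CHARS consumes a run of ordinary characters and stops at the first delimiter. A minimal R2 sketch (the input string is illustrative):

```rebol
delims: charset ".!?"        ; bitset matching only "." "!" "?"
chars:  complement delims    ; bitset matching every OTHER character

parse/all "abc.def" [some chars "." some chars]  ; == true
parse/all "abc.def" [some delims]                ; == false - "a" is not a delimiter
```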
Oldes: 1-Feb-2009 | Is there any better way to change the main parse rules during a parse than this one? (just a simple example.. in real life the lexers would be more complicated :) d: charset "0123456789" lexer1: [copy x 1 skip (probe x if x = "." [lexer: lexer2]) | end skip] lexer2: [copy x some d (probe x lexer: lexer1) | end skip] lexer: lexer1 parse "abcd.123efgh" [ some [() lexer]] | |
Oldes: 2-Feb-2009 | I really like REBOL when I'm able to do things like: c1: context [ n: 1 lexer: [copy x 1 skip (prin reform ["in context:" n "=> "] probe x if x = "." [root-lexer: c2/lexer]) | end skip] ] c2: context [ n: 2 d: charset "0123456789" lexer: [copy x some d (prin reform ["in context:" n"=> "] probe x root-lexer: c1/lexer) | end skip] ] root-lexer: c1/lexer parse "abcd.123efgh" [ some [() root-lexer]] | |
Janko: 14-Feb-2009 | >> T: K: D: "" parse doc [ SOME [ thru "<meta" "name=" skip [ "description" (V: 'D) | "keywords" (V: 'K)] skip "content=" m: skip (m1: first m ) copy T to m1 (set V T) ] to end ] ?? K ?? D K: {Company Directory, Join Us, Advanced Search, Trade Leads, Forum, Trade Shows, Advertising, Translation, fair trade, trade portal, business to business, trade leads, trade events, china export, china manufacturer} D: {New international trade portal and company directory for Asia, Europe and North America. Our priority No.1 is to create and maintain a safe, well lit business-to-business marketplace, by assisting our members in identifying new trustworthy business partners!} == {New international trade portal and company directory for Asia, Europe and North America. Our priority No.1 is to create and mai... >> | |
Maxim: 17-May-2009 | it took me about 30 seconds to solve it with lines. with a single parse rule, after 15m I was still trying to corner a simple detail that meant rewriting the whole rules, or adding a new rule, just for one specific situation. Had I started with another rule setup, I'd have encountered another nagging situation (like the one yours has tumbled upon). my time / hour is worth more than 2 milliseconds of my computer consuming 1/4 watt of electricity. Using 500 bytes more of RAM that is recycled also isn't worth consideration. like I said, I'm pragmatic, that's all there is to it. | |
BrianH: 23-Jun-2009 | In R2: >> parse/all { X X XX X X} [(prin 'a) some [(prin 'b) "X" (prin 'c) [(prin 'd) "X" (prin 'e) | (prin 'f) skip (prin 'g)] (prin 'h) | (prin 'i) skip (prin 'j)] (prin 'k)] abijbcdfghbcdfghbijbcdehbijbcdfghbcdfijbik== true In R3: >> parse/all { X X XX X X} [(prin 'a) some [(prin 'b) "X" (prin 'c) [(prin 'd) "X" (prin 'e) | (prin 'f) skip (prin 'g)] (prin 'h) | (prin 'i) skip (prin 'j)] (prin 'k)] abijbcdfghbcdfghbijbcdehbijbcdfghbcdfijk== true In both cases the fij near the end should be fgh - a bug in PARSE. | |
PatrickP61: 17-Jul-2009 | Hi All, I'm new to PARSE, so I've come here to learn a little more. I'm working on and off on a little testing project of my own for R3. My goal is to navigate through some website(s), capture Rebol code, and the expected responses, such as this page: http://rebol.com/r3/docs/functions/try.html I'd like to capture the text inside a block like this: [ "cmd" {if error? try [1 + "x"] [print "Did not work."]} rsp {Did not work.} cmd {if error? try [load "$10,20,30"] [print "No good"]} rsp {No good}] Can anyone point me to some parse example code which can "tear apart" an HTTP page based on text and the type of text? I realize I may be biting off a bit more than I can chew, but I'd still like to give it a try. Thanks in advance. | |
RobertS: 28-Sep-2009 | I put a note up because of my silly misunderstanding of the intent of adding AND to PARSE. But I get odd results with the likes of parse "abeabd" [and [thru "e"] [thru "d'"]] which behaves like ANY | |
RobertS: 30-Sep-2009 | I am still guessing at what is intended in R3-a84 but the first looks OK and the second looks like a bug >> parse "abad" [thru "a" stay [to "b"] (print "at b") thru "d"] at b == true >> parse "abad" [stay thru "c" (print "at c") [to "b"] thru "d"] at c == true ; BUT must still be a bug | |
Pekr: 1-Oct-2009 | >> parse d: "abc" [change skip 123] >> d == "123bc" | |
Gregg: 2-Dec-2009 | It's not necessarily a PARSE limitation, but there are things we'd like PARSE to do that aren't always reasonable. :-) TO and THRU can work very well, but that doesn't mean they'll work for every situation. You may have to use rules where you check for your target value or just SKIP, marking locations in the input as you go. | |
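The mark-and-SKIP approach Gregg mentions looks roughly like this in R2 (the digit target and input string are illustrative):

```rebol
digit: charset "0123456789"
hits: copy []
parse/all "a1b22c333" [
    some [
        start: some digit stop:             ; mark before and after a match
        (append hits copy/part start stop)  ; collect the matched span
        | skip                              ; no match here, advance one character
    ]
]
; hits == ["1" "22" "333"]
```

The `| skip` alternative is what keeps the rule moving when the target doesn't match at the current position, which is exactly the situation TO and THRU can't express for charset-style targets in R2.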
Graham: 7-Feb-2010 | I want to extract all the dates ( dd-mmm-yy, dd mmm yyyy d mmmmmmm yy ) extract-dates: func [ txt /local months dates days month year ][ dates: copy [] months: copy [] digit: charset [ #"0" - #"9" ] digits: [ some digit ] foreach mon system/locale/months [ repend months [ mon '| copy/part mon 3 '| ] ] remove back tail months parse txt [ some [ to 1 2 digits copy days 1 2 digit [ #" " | #"-" ] copy month months [ #" " | #"-" ] copy year [ 4 digits | 2 digits ] ( repend dates rejoin [ days "-" month "-" year ] ) | thru 1 2 digits ?? ] ] dates ] extract-dates "asdf sdfsf 11 Jan 2008 12-January-10 fasdfsaf asdf as 11 2 3 3 13-Feb-08 asdfasf " | |
Anton: 30-Jul-2010 | Ok, continuing the discussion from "Performance" group, I'd like to ask for some help with parsing rebol format files. Basically, I'd like to be able to extract a block near the beginning or end of a file, while minimizing disk access. The files to be parsed could be large, so I don't want to load the entire contents, but chunks at a time. So my parse rule should be able to detect when the input has been exhausted and ask for another chunk. (When extracting a block near the end of a file, I'll have to parse in reverse, but I'll try to implement that later.) | |
Anton: 30-Jul-2010 | Using LOAD/NEXT, I still have to use an O(n^2) algorithm. I'd now like to do my own parse, which can be O(n). | |
Sunanda: 4-Nov-2010 | Question on StackOverflow.....there must be a better answer than mine, and I'd suspect it involves PARSE (better answers usually do:) http://stackoverflow.com/questions/4093714/is-there-finer-granularity-than-load-next-for-reading-structured-data | |
Ladislav: 1-Dec-2010 | >> parse [a b c/d/e] [2 word! into [3 word!]] == true | |
Group: Dialects ... Questions about how to create dialects [web-public] | ||
btiffin: 15-Sep-2006 | Requesting Opinions. Being a crusty old forther, I really really miss the immersive nature of the block editor environment. Coding in forth meant never leaving forth. Editor, debugger, disk drivers etc... all forth commands. No need to ever have the brain exit forth mode. Now that Rebol is my language of the future, I kinda pine for the past. The wonder and beauty of Rebol keeps being interrupted by decisions on what to use to edit that last little bit of script. Notepad, Crimson Editor, Rebol editor? A small annoyance but it still disrupts the brain from getting to streaming mode. So now to the question. My first crack at a forth block editor dialect failed miserably. Dialects need to be LOADable for parse to function. Editing source code makes for unloadable situations. Do I just give up on it and learn to live in the third millennium? Write a utility that doesn't use dialects (which seems to unRebol the solution)? I thought I'd ask you guys, just in case there is a light shining in front of me that I can't see. Thanks in advance. | |
Chris: 4-Mar-2010 | I'm rethinking the behaviour of my 'import dialect (library: http://bit.ly/rebimport ) when working with structured data. In its simplest form, 'import filters a block of key-string pairs based on a supplied set of constraints: import [a "1"][a: integer! is more-than 0] == [a 1] ; or none if the constraints are not met There are two nested forms I'd like to support: 1) a continuation of key-value blocks [a [b "1"]] and 2) a block of values [c ["b" "1" "foo"]] The first could just be a recursive function or parse call. The second needs a little more thought - on the face of it, it could just verify the contents conform to a preset group: [ ["a" "b"] contains some of ["a" "b" "c"] ] (or any of), which'd be fine for validating web form input (eg. multi-select list), but would rule out, say, a JSON block containing objects (as key-value pairs). I'm trying to figure out if this is overkill or a genuinely useful way of validating structured data... Then there's ["1" "2" "3"] <- be nice to validate as [some integer!] or [some integer! | decimal!]. I don't want it to be overly complex, but it should at least be useful - anyone have any conventional cases for validating a block of strings? | |
Group: !RebGUI ... A lightweight alternative to VID [web-public] | ||
Volker: 28-Apr-2005 | n: 100'000 bench: func [code] [t1: now/precise loop n code print [difference now/precise t1 mold code] ] bench [switch 'f [a [] b [] c [] d [] e [] f [] g [] h []]] bench [parse [f] ['a () | 'b () | 'c () | 'd () | 'f () | 'g () | 'h ()]] | |
Chris: 5-Jun-2005 | REBOL [] load-include: func [include [any-block!]][ either parse include reduce [to-issue 'include file!][load include/2][include] ] RebGUI: context [ d: "D" ctx: do bind load-include [#include %include.r] 'self ] RebGUI/ctx/b RebGUI/ctx/c | |
Group: XML ... xml related conversations [web-public] | ||
Sunanda: 1-Nov-2005 | Carl has talked several times about a binary format for saving REBOL structures (can't find any references off-hand). That would probably solve this problem, as what is saved is, in effect, the internal in-memory format: useless for non-REBOL data exchange and perhaps dangerous for cross-REBOL-release data exchange, but much much faster as it'd avoid most of the parse and load that REBOL does now. | |
BrianH: 29-Apr-2006 | You can do some structural pattern matching with parse rules, but with how parse is currently implemented it can be a little awkward. The lack of arguments to parse rules makes recursion quite difficult, and the lack of local variables makes the rules difficult to use concurrently. It is difficult to examine both the data type and the value of elements in block parsing, to switch to string parsing mode for string elements, to parse lists, hashes or parens, to direct the parse flow based on semantic criteria (which is needed to work around any of these other problems). And don't even get me started on the difficulties of structure rebuilding. The thing that is the most difficult to do in parse is the easiest thing to do with regexes: Search and replace. Didn't we make a web site years ago collecting suggestions for improving parse? Wasn't a replace operation one of those suggestions? What happened with that? Structural pattern matching and rebuilding currently has to be done with a mix of parse and REBOL code that is tricky to write and debug. If parse doesn't get improved, I'd rather use a nice declarative dialect, preferably with before and after structures, and have the dialect processor generate the parse and REBOL code for me. If that dialect is powerful enough to be written in itself then we'll really be cooking. | |
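For what it's worth, the usual R2 workaround for the search-and-replace case Brian describes is the mark-change-resume idiom (the strings here are illustrative):

```rebol
; Replace every "cat" with "dog" in place, using mark, CHANGE/part, and :resume.
str: "one cat, two cats"
parse/all str [
    some [
        mark: "cat"                          ; mark the start of a match
        (mark: change/part mark "dog" 3)     ; splice in the replacement
        :mark                                ; resume parsing just past it
        | skip                               ; otherwise advance one character
    ]
]
; str == "one dog, two dogs"
```

Getting the `:mark` resume right (especially when the replacement is a different length) is exactly the part that even experienced PARSE users tend to get wrong, which is the motivation for the CHANGE/INSERT/REMOVE proposals discussed above.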
Group: SVG Renderer ... SVG rendering in Draw AGG [web-public] | ||
shadwolf: 23-Jun-2005 | REBOL [ Title: "SVG Demo" Owner: "Ashley G. Trüter" Version: 0.0.1 Date: 21-Jun-2005 Purpose: "Loads and displays a resizeable SVG file." History: { 0.0.1 Initial release } Notes: { Tested on very simple SVG icons Only a few basic styles / attributes / commands supported Does not handle sizes in units other than pixels (e.g. pt, in, cm, mm, etc) SVG path has an optional close command, "z" ... AGG shape equivalent auto-closes load-svg function needs to be totally refactored / optimized ... *sample only* } ] ; The following commands are available for path data: ; ; M = moveto ; L = lineto ; H = horizontal lineto ; V = vertical lineto ; C = curveto ; S = smooth curveto ; Q = quadratic Belzier curve ; T = smooth quadratic Belzier curveto ; A = elliptical Arc ; Z = closepath ;print: none ; comment out this line to enable debug messages load-svg: function [svg-file [file! string!] size [pair!]] [ id defs x y to-color to-byte draw-blk append-style svg-size scale-x scale-y ][ xml: either string? svg-file [parse-xml svg-file] [ unless %.svg = suffix? svg-file [to error! "File has an invalid suffix!"] parse-xml read svg-file ] unless xml/3/1/1 = "svg" [to error! "Could not find SVG header!"] ;unless find ["id" "xmlns"] xml/3/1/2/1 [to error! "Could not find ID header!"] ;unless xml/3/1/3/1/1 = "defs" [to error! "Could not find DEFS header!"] id: xml/3/1/2 defs: xml/3/1/3 ; ; --- Parse SVG id ; svg-size: either find ["32pt" "48pt" "72pt"] select id "width" [ switch select id "width" [ "72pt" [120x120] "48pt" [80x80] "32pt" [60x60] ] ][ as-pair to integer! any [select id "width" "100"] to integer! any [select id "height" "100"] ] x: to integer! any [select id "x" "0"] y: to integer! any [select id "y" "0"] scale-x: size/x / svg-size/x scale-y: size/y / svg-size/y ; ; --- Helper functions ; to-color: func [s [string!]] [ ; converts a string in the form "#FFFFFF" to a 4-byte tuple to tuple! 
load rejoin ["#{" next s "00}"] ] to-byte: func [s [string!]] [ ; converts a string with a value 0-1 to an inverted byte 255 - to integer! 255 * to decimal! s ] ; ; --- Parse SVG defs ; draw-blk: copy [] append-style: function [ command [string!] blk [block!] ][ x xy pen-color fill-color line-width mode size radius shape closed? matrix transf-command ][ xy: 0x0 size: 0x0 line-width: 1 matrice: make block! [] radius: none transf-command: none foreach [attr val] blk [ switch attr [ "transform" [print "tranform have been found" ;probe val halt val: parse val "()," transf-command: first val probe transf-command switch transf-command [ "matrix" [ foreach word val [ if not find word "matrix" [ insert tail matrice to-decimal word ] ] ] ] ] "style" [ foreach [attr val] parse val ":;" [ switch/default attr [ "font-size" [ ] "stroke" [ switch/default first val [ #"#" [pen-color: to-color val] #"n" [pen-color: none] ][ print ["Unknown stroke:" val] ] ] "stroke-width" [line-width: to decimal! val] "fill" [ fill-color: switch/default first val [ #"#" [to-color val] #"n" [none] ][ print ["Unknown fill value:" val] none ] ] "fill-rule" [ mode: switch/default val [ "evenodd" ['even-odd] ][ print ["Unknown fill-rule value:" val] none ] ] "stroke-opacity" [pen-color: any [pen-color 0.0.0.0] pen-color/4: to-byte val] "fill-opacity" [fill-color: any [fill-color 0.0.0.0] fill-color/4: to-byte val] "stroke-linejoin" [ insert tail draw-blk switch/default val [ "miter" [compose [line-join miter]] "round" [compose [line-join round]] "bevel" [compose [line-join bevel]] ][ print ["Unknown stroke-linejoin value:" val] none ] ] "stroke-linecap" [ insert tail draw-blk 'line-cap insert tail draw-blk to word! val ] ][ print ["Unknown style:" attr] ] ] ] "x" [xy/x: scale-x * val] "y" [xy/y: scale-y * val] "width" [size/x: scale-x * val] "height" [size/y: scale-y * val] "rx" [print "rx"] "ry" [radius: to decimal! 
val] "d" [ shape: copy [] x: none closed?: false foreach token load val [ switch/default token [ M [insert tail shape 'move] C [insert tail shape 'curve] L [insert tail shape 'line] z [closed?: true] ][ unless number? token [print ["Unknown path command:" token]] either x [insert tail shape as-pair x scale-y * token x: none] [x: scale-x * token] ] ] ] ] ] insert tail draw-blk compose [ pen (pen-color) fill-pen (fill-color) fill-rule (mode) line-width (line-width * min scale-x scale-y) ] switch command [ "rect" [ insert tail draw-blk compose [box (xy) (xy + size)] if radius [insert tail draw-blk radius] ] "path" [ unless closed? [print "Path closed"] either transf-command <> none [ switch transf-command [ "matrix" [insert tail draw-blk compose/only [ (to-word transf-command) (matrice) shape (shape) reset-matrix]] ] ][ insert tail draw-blk compose/only [shape (shape)] ] ] "g" [ print "Write here how to handle G insertion to Draw block" insert tail draw-blk probe compose/only [reset-matrix (to-word transf-command) (matrice)] ] ] ] probe defs foreach blk defs [ switch first blk [ "rect" [append-style first blk second blk] "path" [append-style first blk second blk] "g" [ print "key word" probe first blk print "matrix and style in G" probe second blk append-style first blk second blk ;print "what to draw in G" probe third blk foreach blk2 third blk [ probe blk2 switch first blk2[ "path" [append-style first blk2 second blk2] ] ] ] ] ] probe draw-blk draw-blk ] view make face [ offset: 100x100 size: 200x200 action: request-file/filter/only "*.svg" text: rejoin ["SVG Demo [" last split-path action "]"] data: read action color: white effect: compose/only [draw (load-svg data size)] edge: font: para: none feel: make feel [ detect: func [face event] [ if event/type = 'resize [ insert clear face/effect/draw load-svg face/data face/size show face ] if event/type = 'close [quit] ] ] options: [resize] ] | |
shadwolf: 23-Jun-2005 | REBOL [ Title: "SVG Demo" Owner: "Ashley G. Trüter" Version: 0.0.1 Date: 21-Jun-2005 Purpose: "Loads and displays a resizeable SVG file." History: { 0.0.1 Initial release } Notes: { Tested on very simple SVG icons Only a few basic styles / attributes / commands supported Does not handle sizes in units other than pixels (e.g. pt, in, cm, mm, etc) SVG path has an optional close command, "z" ... AGG shape equivalent auto-closes load-svg function needs to be totally refactored / optimized ... *sample only* } ] ; The following commands are available for path data: ; ; M = moveto ; L = lineto ; H = horizontal lineto ; V = vertical lineto ; C = curveto ; S = smooth curveto ; Q = quadratic Belzier curve ; T = smooth quadratic Belzier curveto ; A = elliptical Arc ; Z = closepath ;print: none ; comment out this line to enable debug messages load-svg: function [svg-file [file! string!] size [pair!]] [ id defs x y to-color to-byte draw-blk append-style svg-size scale-x scale-y ][ xml: either string? svg-file [parse-xml svg-file] [ unless %.svg = suffix? svg-file [to error! "File has an invalid suffix!"] parse-xml read svg-file ] unless xml/3/1/1 = "svg" [to error! "Could not find SVG header!"] ;unless find ["id" "xmlns"] xml/3/1/2/1 [to error! "Could not find ID header!"] ;unless xml/3/1/3/1/1 = "defs" [to error! "Could not find DEFS header!"] id: xml/3/1/2 defs: xml/3/1/3 ; ; --- Parse SVG id ; svg-size: either find ["32pt" "48pt" "72pt"] select id "width" [ switch select id "width" [ "72pt" [120x120] "48pt" [80x80] "32pt" [60x60] ] ][ as-pair to integer! any [select id "width" "100"] to integer! any [select id "height" "100"] ] x: to integer! any [select id "x" "0"] y: to integer! any [select id "y" "0"] scale-x: size/x / svg-size/x scale-y: size/y / svg-size/y ; ; --- Helper functions ; to-color: func [s [string!]] [ ; converts a string in the form "#FFFFFF" to a 4-byte tuple to tuple! 
load rejoin ["#{" next s "00}"] ] to-byte: func [s [string!]] [ ; converts a string with a value 0-1 to an inverted byte 255 - to integer! 255 * to decimal! s ] ; ; --- Parse SVG defs ; draw-blk: copy [] append-style: function [ command [string!] blk [block!] ][ x xy pen-color fill-color line-width mode size radius shape closed? matrix transf-command ][ xy: 0x0 size: 0x0 line-width: 1 matrice: make block! [] radius: none transf-command: none foreach [attr val] blk [ switch attr [ "transform" [print "tranform have been found" ;probe val halt val: parse val "()," transf-command: first val probe transf-command switch transf-command [ "matrix" [ foreach word val [ if not find word "matrix" [ insert tail matrice to-decimal word ] ] ] ] ] "style" [ foreach [attr val] parse val ":;" [ switch/default attr [ "font-size" [ ] "stroke" [ switch/default first val [ #"#" [pen-color: to-color val] #"n" [pen-color: none] ][ print ["Unknown stroke:" val] ] ] "stroke-width" [line-width: to decimal! val] "fill" [ fill-color: switch/default first val [ #"#" [to-color val] #"n" [none] ][ print ["Unknown fill value:" val] none ] ] "fill-rule" [ mode: switch/default val [ "evenodd" ['even-odd] ][ print ["Unknown fill-rule value:" val] none ] ] "stroke-opacity" [pen-color: any [pen-color 0.0.0.0] pen-color/4: to-byte val] "fill-opacity" [fill-color: any [fill-color 0.0.0.0] fill-color/4: to-byte val] "stroke-linejoin" [ insert tail draw-blk switch/default val [ "miter" [compose [line-join miter]] "round" [compose [line-join round]] "bevel" [compose [line-join bevel]] ][ print ["Unknown stroke-linejoin value:" val] none ] ] "stroke-linecap" [ insert tail draw-blk 'line-cap insert tail draw-blk to word! val ] ][ print ["Unknown style:" attr] ] ] ] "x" [xy/x: scale-x * val] "y" [xy/y: scale-y * val] "width" [size/x: scale-x * val] "height" [size/y: scale-y * val] "rx" [print "rx"] "ry" [radius: to decimal! 
val] "d" [ shape: copy [] x: none closed?: false foreach token load val [ switch/default token [ M [insert tail shape 'move] C [insert tail shape 'curve] S [insert tail shape 'curv] L [insert tail shape 'line] Q [insert tail shape 'qcurve] T [insert tail shape 'qcurv] z [closed?: true] H [insert tail shape 'hline] V [insert tail shape 'vline] A [insert tail shape 'arc] ][ unless number? token [print ["Unknown path command:" token]] either x [insert tail shape as-pair x scale-y * token x: none] [x: scale-x * token] ] ] ] ] ] insert tail draw-blk compose [ pen (pen-color) fill-pen (fill-color) fill-rule (mode) line-width (line-width * min scale-x scale-y) ] switch command [ "rect" [ insert tail draw-blk compose [box (xy) (xy + size)] if radius [insert tail draw-blk radius] ] "path" [ unless closed? [print "Path closed"] either transf-command <> none [ switch transf-command [ "matrix" [insert tail draw-blk compose/only [ (to-word transf-command) (matrice) shape (shape) reset-matrix]] ] ][ insert tail draw-blk compose/only [shape (shape)] ] ] "g" [ print "Write here how to handle G insertion to Draw block" insert tail draw-blk probe compose/only [reset-matrix (to-word transf-command) (matrice)] ] ] ] probe defs foreach blk defs [ switch first blk [ "rect" [append-style first blk second blk] "path" [append-style first blk second blk] "g" [ print "key word" probe first blk print "matrix and style in G" probe second blk append-style first blk second blk ;print "what to draw in G" probe third blk foreach blk2 third blk [ probe blk2 switch first blk2[ "path" [append-style first blk2 second blk2] ] ] ] ] ] probe draw-blk draw-blk ] view make face [ offset: 100x100 size: 200x200 action: request-file/filter/only "*.svg" text: rejoin ["SVG Demo [" last split-path action "]"] data: read action color: white effect: compose/only [draw (load-svg data size)] edge: font: para: none feel: make feel [ detect: func [face event] [ if event/type = 'resize [ insert clear face/effect/draw 
load-svg face/data face/size show face ] if event/type = 'close [quit] ] ] options: [resize] ] | |
shadwolf: 23-Jun-2005 | REBOL [
    Title: "SVG Demo"
    Owner: "Ashley G. Trüter"
    Version: 0.0.1
    Date: 21-Jun-2005
    Purpose: "Loads and displays a resizeable SVG file."
    History: {
        0.0.1 Initial release
    }
    Notes: {
        Tested on very simple SVG icons
        Only a few basic styles / attributes / commands supported
        Does not handle sizes in units other than pixels (e.g. pt, in, cm, mm, etc)
        SVG path has an optional close command, "z" ... AGG shape equivalent auto-closes
        load-svg function needs to be totally refactored / optimized ... *sample only*
    }
]

; The following commands are available for path data:
;
;   M = moveto
;   L = lineto
;   H = horizontal lineto
;   V = vertical lineto
;   C = curveto
;   S = smooth curveto
;   Q = quadratic Bezier curve
;   T = smooth quadratic Bezier curveto
;   A = elliptical Arc
;   Z = closepath

;print: none  ; comment out this line to enable debug messages

load-svg: function [svg-file [file! string!] size [pair!]] [
    id defs x y to-color to-byte draw-blk append-style svg-size scale-x scale-y
][
    xml: either string? svg-file [parse-xml svg-file] [
        unless %.svg = suffix? svg-file [to error! "File has an invalid suffix!"]
        parse-xml read svg-file
    ]
    unless xml/3/1/1 = "svg" [to error! "Could not find SVG header!"]
    ;unless find ["id" "xmlns"] xml/3/1/2/1 [to error! "Could not find ID header!"]
    ;unless xml/3/1/3/1/1 = "defs" [to error! "Could not find DEFS header!"]
    id: xml/3/1/2
    defs: xml/3/1/3
    ;
    ; --- Parse SVG id
    ;
    svg-size: either find ["32pt" "48pt" "72pt"] select id "width" [
        switch select id "width" [
            "72pt" [120x120]
            "48pt" [80x80]
            "32pt" [60x60]
        ]
    ][
        as-pair
            to integer! any [select id "width" "100"]
            to integer! any [select id "height" "100"]
    ]
    x: to integer! any [select id "x" "0"]
    y: to integer! any [select id "y" "0"]
    scale-x: size/x / svg-size/x
    scale-y: size/y / svg-size/y
    ;
    ; --- Helper functions
    ;
    to-color: func [s [string!]] [  ; converts a string in the form "#FFFFFF" to a 4-byte tuple
        to tuple! load rejoin ["#{" next s "00}"]
    ]
    to-byte: func [s [string!]] [   ; converts a string with a value 0-1 to an inverted byte
        255 - to integer! 255 * to decimal! s
    ]
    ;
    ; --- Parse SVG defs
    ;
    draw-blk: copy []
    append-style: function [
        command [string!]
        blk [block!]
    ][
        x xy pen-color fill-color line-width mode size radius shape closed? matrice transf-command
    ][
        xy: 0x0
        size: 0x0
        line-width: 1
        matrice: make block! []
        radius: none
        transf-command: none
        foreach [attr val] blk [
            switch attr [
                "transform" [
                    print "transform has been found"
                    ;probe val halt
                    val: parse val "(),"
                    transf-command: first val
                    probe transf-command
                    switch transf-command [
                        "matrix" [
                            foreach word val [
                                if not find word "matrix" [
                                    insert tail matrice to-decimal word
                                ]
                            ]
                        ]
                    ]
                ]
                "style" [
                    foreach [attr val] parse val ":;" [
                        switch/default attr [
                            "font-size" [
                            ]
                            "stroke" [
                                switch/default first val [
                                    #"#" [pen-color: to-color val]
                                    #"n" [pen-color: none]
                                ][
                                    print ["Unknown stroke:" val]
                                ]
                            ]
                            "stroke-width" [line-width: to decimal! val]
                            "fill" [
                                fill-color: switch/default first val [
                                    #"#" [to-color val]
                                    #"n" [none]
                                ][
                                    print ["Unknown fill value:" val]
                                    none
                                ]
                            ]
                            "fill-rule" [
                                mode: switch/default val [
                                    "evenodd" ['even-odd]
                                ][
                                    print ["Unknown fill-rule value:" val]
                                    none
                                ]
                            ]
                            "stroke-opacity" [
                                pen-color: any [pen-color 0.0.0.0]
                                pen-color/4: to-byte val
                            ]
                            "fill-opacity" [
                                fill-color: any [fill-color 0.0.0.0]
                                fill-color/4: to-byte val
                            ]
                            "stroke-linejoin" [
                                insert tail draw-blk switch/default val [
                                    "miter" [compose [line-join miter]]
                                    "round" [compose [line-join round]]
                                    "bevel" [compose [line-join bevel]]
                                ][
                                    print ["Unknown stroke-linejoin value:" val]
                                    none
                                ]
                            ]
                            "stroke-linecap" [
                                insert tail draw-blk 'line-cap
                                insert tail draw-blk to word! val
                            ]
                        ][
                            print ["Unknown style:" attr]
                        ]
                    ]
                ]
                "x" [xy/x: scale-x * val]
                "y" [xy/y: scale-y * val]
                "width" [size/x: scale-x * val]
                "height" [size/y: scale-y * val]
                "rx" [print "rx"]
                "ry" [radius: to decimal! val]
                "d" [
                    shape: copy []
                    x: none
                    closed?: false
                    foreach token load val [
                        if all [x not number? token] [
                            insert tail shape x * either token = 'V [scale-y] [scale-x]
                            x: none
                        ]
                        switch/default token [
                            M [insert tail shape 'move]
                            C [insert tail shape 'curve]
                            S [insert tail shape 'curv]
                            L [insert tail shape 'line]
                            Q [insert tail shape 'qcurve]
                            T [insert tail shape 'qcurv]
                            z [closed?: true]
                            H [insert tail shape 'hline]
                            V [insert tail shape 'vline]
                            A [insert tail shape 'arc]
                        ][
                            unless number? token [print ["Unknown path command:" token]]
                            either x [
                                insert tail shape as-pair x scale-y * token
                                x: none
                            ][
                                x: scale-x * token
                            ]
                        ]
                    ]
                ]
            ]
        ]
        insert tail draw-blk compose [
            pen (pen-color)
            fill-pen (fill-color)
            fill-rule (mode)
            line-width (line-width * min scale-x scale-y)
        ]
        switch command [
            "rect" [
                insert tail draw-blk compose [box (xy) (xy + size)]
                if radius [insert tail draw-blk radius]
            ]
            "path" [
                unless closed? [print "Path not closed"]
                either transf-command <> none [
                    switch transf-command [
                        "matrix" [
                            insert tail draw-blk compose/only [
                                (to-word transf-command) (matrice) shape (shape) reset-matrix
                            ]
                        ]
                    ]
                ][
                    insert tail draw-blk compose/only [shape (shape)]
                ]
            ]
            "g" [
                print "Write here how to handle G insertion to Draw block"
                insert tail draw-blk probe compose/only [reset-matrix (to-word transf-command) (matrice)]
            ]
        ]
    ]
    probe defs
    foreach blk defs [
        switch first blk [
            "rect" [append-style first blk second blk]
            "path" [append-style first blk second blk]
            "g" [
                print "key word"
                probe first blk
                print "matrix and style in G"
                probe second blk
                append-style first blk second blk
                ;print "what to draw in G" probe third blk
                foreach blk2 third blk [
                    probe blk2
                    switch first blk2 [
                        "path" [append-style first blk2 second blk2]
                    ]
                ]
            ]
        ]
    ]
    probe draw-blk
    draw-blk
]

view make face [
    offset: 100x100
    size: 200x200
    action: request-file/filter/only "*.svg"
    text: rejoin ["SVG Demo [" last split-path action "]"]
    data: read action
    color: white
    effect: compose/only [draw (load-svg data size)]
    edge: font: para: none
    feel: make feel [
        detect: func [face event] [
            if event/type = 'resize [
                insert clear face/effect/draw load-svg face/data face/size
                show face
            ]
            if event/type = 'close [quit]
        ]
    ]
    options: [resize]
] | |
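For orientation, here is a hedged sketch of what the script's "rect" branch would emit. The input SVG and the numbers are illustrative only and were not run against this exact script:

```rebol
; Illustrative only: a 100x100 SVG containing one rect, loaded into a
; 200x200 face, so scale-x = scale-y = 2.
svg: {<svg width="100" height="100">
    <rect x="10" y="10" width="50" height="30"/>
</svg>}
; The "rect" branch scales x/y/width/height by the scale factors and
; appends roughly:
;     box 20x20 120x80
; (xy = 2 * 10x10, size = 2 * 50x30, box drawn from xy to xy + size)
; to the generated draw block, after the pen/fill-pen/line-width header.
blk: load-svg svg 200x200
```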
Group: Rebol School ... Rebol School [web-public] | ||
Gabriele: 6-Jul-2011 | You could trivially change the parser in Topaz to allow [ and ] inside words, and then write something like: a[b c]d but, is that a good thing? So, what's the actual purpose of allowing a,b to be a word? So far, the only purpose has been "to parse other languages as if they were REBOL". That's not a good purpose, because they are *not* REBOL. If you need to parse other syntax, you need string parsing; block parsing is for REBOL dialects. The only sensible reason I can imagine for , to be a word would be to use it as an operator, so that: a , b means also a b but that has the same readability problems as using . as an "end of command" marker in dialects. A nice idea in the abstract, but terrible in practice. | |
Janko: 6-Jul-2011 | I can just take it as a game, and land some easy punches on places you exposed up there :) for example, you mention drawing the lines:
- I prefer consistency
easier parser - consistent languages are often easier to parse
various cases where it might look weird - f (a.b + c'd) ~ is this better? this is valid now :)
allowing - I like languages where the creator makes a conceptually focused, clear, expandable, consistent "engine" and we can grow and combine that beyond what the language maker was able to initially imagine. This Guy (:)) talks about something like this: http://video.google.com/videoplay?docid=-8860158196198824415 REBOL is one of very few languages where this actually is possible; another language like this is Factor. You ask if a[ could be a word. In Factor this is the case, and they also found a concrete use for this specific case. It's been more than a year so I have to check where I have seen it. (Factor has a compilation stage (live compilation too) so they have compile-time macros where they can extend syntax in various ways). | |
Gabriele: 7-Jul-2011 | easier parser: adding , does not make the parser easier. It would probably be trivial to allow it *inside* a word, ie. "a,b", but it's going to be more complicated to handle , alone and then worry about its usage within numbers. f (a.b + c'd) - that will either immediately look weird to anyone used to other languages, or be easily seen as passing one argument. f (a,b + c,d) would instead be read as f (a) (b + c) (d) which it is not. This is a weak argument in the sense that people knowing REBOL will probably have little problem with this... but REBOL is "weird" enough already. :) expandable: right, indeed you have string parsing, and you can do whatever you want with it. do you expect other languages to parse whatever is thrown at them? no they don't. you have to write the parser. having anything in words: it remains to be proven whether this makes things better or worse. i suspect "worse". | |
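Gabriele's string-parsing vs block-parsing distinction can be made concrete with a minimal sketch (variable names are arbitrary):

```rebol
; String parsing: foreign syntax such as "a,b" is handled by PARSE
; working on the string itself, before any REBOL loading happens.
parse "a,b" [copy left to "," skip copy right to end]
; left is "a", right is "b"

; Block parsing operates on values REBOL has already loaded; it is the
; loader, not PARSE, that decides whether "a,b" is one word -- which is
; exactly the point of the debate above.
```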
Group: !RebDB ... REBOL Pseudo-Relational Database [web-public] | ||
Sunanda: 11-Feb-2006 | Traditionally with embedded SQL, the technique is to use "host variables", which start with a colon: sql reduce "select * from table where [all [col1 = :var1 col2 = :var2 ]]" And you'd magically replace :var1 with the value of var1. Which is almost exactly the behaviour you'd expect from :var1 in REBOL too. If you insist that host variables always have a space before and after, that makes the whole substitution process a fairly simple parse operation. | |
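The substitution Sunanda describes could be sketched like this; `bind-hosts` is a hypothetical helper name, not part of RebDB, and it relies on Sunanda's assumption that host variables are space-delimited:

```rebol
; Hypothetical sketch: replace each space-delimited :var token with
; the molded value of the variable it names. Not RebDB's mechanism.
bind-hosts: func [stmt [string!] /local out tok] [
    out: copy ""
    foreach tok parse stmt " " [
        ; a token starting with ":" is treated as a host variable
        if #":" = first tok [tok: mold get to word! next tok]
        repend out [tok " "]
    ]
    trim out
]

var1: 5
var2: "abc"
bind-hosts "select * from table where [all [col1 = :var1 col2 = :var2 ]]"
; should give roughly: select * from table where [all [col1 = 5 col2 = "abc" ]]
```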
Group: !REBOL3-OLD1 ... [web-public] | ||
Gabriele: 5-Jun-2007 | having more time, i'd just study the binary file format and parse it myself. i don't see any show-stopper. |