World: r3wp

[Parse] Discussion of PARSE dialect

Gabriele
15-Oct-2009
[4505x2]
the reason for the // is to allow relative paths like //www.rebol.com/ 
  where the scheme is the same as the base URL. Nobody has ever used 
this; also, it could have been achieved by using :www.rebol.com/ 
instead... so, yeah, it was not really a good idea. I also don't 
think ftp:file.txt (meaning: change scheme, but keep host and path) 
has ever been used, and I'm not sure it's supported by software. So in 
practice http:www.rebol.com/ would have worked.
It has to be said that the www subdomain is also a silly habit.
Pekr
15-Oct-2009
[4507]
As for the parse docs - why is 'then not among the keywords? Isn't 
it a keyword?
BrianH
15-Oct-2009
[4508]
It's an operator, like |, and mentioned in that section near the 
top.
Pekr
15-Oct-2009
[4509]
Isn't AND an operator too, for example?
Chris
17-Oct-2009
[4510]
First pass at converting my AltXML to R3: http://www.ross-gill.com/r/r3xml.r
-- almost a literal translation; it doesn't yet utilise the new parse features...
Pekr
17-Oct-2009
[4511x2]
The DO keyword was added in A91 - Gabriele should be happy now :-) 
http://www.rebol.net/r3blogs/0276.html
http://www.rebol.net/r3blogs/0277.html - Trouble renaming BREAK to 
ACCEPT in the following case:

some [break | "b"]


... where it returns FALSE. Carl seems to suggest that unless that 
case is solved, it can't be renamed ...
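A minimal way to try that case (the input string here is an assumption; FALSE is simply the result reported above):

	parse "b" [some [break | "b"]]    ; reportedly returns FALSE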
Maxim
17-Oct-2009
[4513]
gave my two cents on the blog... hehehe
Pekr
17-Oct-2009
[4514]
Max - we are close to finishing parse. It is clear Carl will move to 
other priorities. Sorry to chase you here and there, but is there any chance 
you could get your short Extensions doc done? No doc = no callback, probably 
:-)
Maxim
17-Oct-2009
[4515]
I really want to do it... but I'm so deep into parsing right now 
I don't want to lose the few GB of information in my brain's cache. 
 I'm writing self-modifying parse rules and it's pretty nightmarish, 
 although it works.
Gabriele
17-Oct-2009
[4516]
Ah, 6 years of waiting finally over. Now, if we could get those changes 
ported to R2...
Maxim
17-Oct-2009
[4517]
hehehe you finally got your DO  ;-)
Henrik
17-Oct-2009
[4518]
Do we have a list of PARSE changes that are no longer compatible 
with R2? I think that would be important in porting parse scripts 
from R2 to R3.
Pekr
17-Oct-2009
[4519x3]
Gabriele - wrong perception :-) The correct claim should be: "And 
now nothing prevents me from fully switching to R3 ..." :-)
So - we don't need complementing to be enhanced? Because we talked 
about it, but it is not defined in the proposal, it is not part of Carl's 
feature table, and I also got no reaction on R3 Chat ...
BrianH
17-Oct-2009
[4522x2]
Gabriele, these changes can be backported to R2 in the form of a 
rule compiler that generates (unreadable) R2 parse rules.
Pekr, we still need complementing to be enhanced. Even Carl has said 
so.
Maxim
17-Oct-2009
[4524x3]
but that wouldn't work with Remark  ;-)
one situation which COMPLEMENT can't handle very well (RAM-wise):

union charset "a" complement charset "b"

you end up with a full codepoint bitset minus one bit, whether or not 
it is stored complemented
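A small illustrative sketch of that case (the names here are assumptions, not from the chat): the complemented side spans nearly the whole codepoint range, so the UNION has to materialize an almost-full bitset either way.

	just-a:     charset "a"
	not-b:      complement charset "b"
	almost-all: union just-a not-b    ; nearly every codepoint ends up set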
BrianH
17-Oct-2009
[4527x2]
Maxim, Remark could be adjusted to use the rule compiler. For that 
matter, Remark could use R2/Forward (which needs some work, but is 
already better than R2 on its own).
Maxim, that is what Pekr was talking about. That is planned to be 
fixed.
Maxim
17-Oct-2009
[4529x4]
a rule compiler doesn't adapt very well to self-modifying rules
laden with many paren expressions and a stack on top of it.
the rule I am writing now actually does JIT rule compilation... hairy 
to debug :-)
since I use binding to map inner rules which are also constructed 
on the fly but have to be pushed and popped from the stack as I traverse 
the data... it's a lot of fun  :-D
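A minimal, illustrative sketch of a self-modifying rule (not Maxim's code, and the names are made up): the paren rewrites SLOT while the parse is running, so the rule that finally matches is built on the fly.

	slot: [end skip]                              ; placeholder that can never match
	rule: ["a" (clear slot append slot "b") slot] ; seeing "a" rewrites SLOT to ["b"]
	parse "ab" rule                               ; true -- the rewritten SLOT matches "b"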
BrianH
17-Oct-2009
[4533x2]
If the self-modifying rules are strung-together basic blocks, you 
can use the rule compiler to generate the blocks. And the R3 changes 
make self-modifying rules less necessary, so you can have even larger 
basic blocks.
Of course the *result* of the compilation would be self-modifying 
rules :)
Maxim
17-Oct-2009
[4535x2]
really, the problem is not the parsing itself... it's getting the 
darn rules to generate the proper rules hehehe.
and it's not simple parsing, since I use parse index manipulation, 
which is also dictated by the source data it encounters. It's like 
swatting flies using a fly swatter at the end of a rope, while riding 
a roller coaster which changes layout every time you ride it  ;-)
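For anyone who has not played with that style, a tiny illustrative sketch of parse index manipulation (nothing like Maxim's actual rules): the data itself decides where parsing resumes.

	parse "3abc!" [
		copy n skip                          ; the first character says how far to jump
		mark: (jump: skip mark to-integer n) ; compute the new position from the data
		:jump "!"                            ; resume parsing at the computed position
	]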
BrianH
17-Oct-2009
[4537]
Which is what a rule compiler does :)  Actually, it sounds like you 
could adapt the tricks of the rule compiler to *your* rule compiler, 
which would let you use the new operations in your rule source and 
have the workarounds generated in the output.
Maxim
17-Oct-2009
[4538x2]
well, build it and I will try it  ;-)
I promise.
BrianH
17-Oct-2009
[4540]
It's on my list...
Maxim
17-Oct-2009
[4541x3]
my deadline is to have a site working by this week... unless this 
darned bug I am trying to kill kills me first.
doh... when you're too close to the tree... you can't see the forest... 


I was using the TO parse command on a rule... this obviously won't work...
(it only accepts a string... dummy :-)
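For contrast, a tiny illustrative case of what TO does take here -- a concrete target such as a string, not a rule name:

	parse "abc!" [to "!" skip]    ; scans ahead to the "!" and then consumes it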
Pekr
18-Oct-2009
[4544]
ah, got a reply on Chat from Carl regarding complementing:


"Re #5718: Pekr, that's a good question, and I think the answer must 
be YES. We need to be able to complement bitmaps in a "nice way". 
Otherwise, Unicode bitmaps, even if simply used on ASCII chars, would 
take a lot of memory.


This change should be listed on the project sheet, and if not, I'll 
add it there."
Chris
22-Oct-2009
[4545x3]
Is there any advantage in breaking up charsets that represent a large, 
varied range of the 16-bit character space? For example, XML names 
are defined as below (excluding codepoints above 2 ** 16), but are most 
commonly limited to the ASCII-friendly subset:
	w1: charset [
		#"A" - #"Z" #"_" #"a" - #"z"
		#"^(C0)" - #"^(D6)" #"^(D8)" - #"^(F6)" #"^(F8)" - #"^(02FF)"
		#"^(0370)" - #"^(037D)" #"^(037F)" - #"^(1FFF)"
		#"^(200C)" - #"^(200D)" #"^(2070)" - #"^(218F)"
		#"^(2C00)" - #"^(2FEF)" #"^(3001)" - #"^(D7FF)"
		#"^(F900)" - #"^(FDCF)" #"^(FDF0)" - #"^(FFFD)"
	]
	w+: charset [
		#"-" #"." #"0" - #"9" #"A" - #"Z" #"_" #"a" - #"z" #"^(B7)"
		#"^(C0)" - #"^(D6)" #"^(D8)" - #"^(F6)" #"^(F8)" - #"^(037D)"
		#"^(037F)" - #"^(1FFF)" #"^(200C)" - #"^(200D)"
		#"^(203F)" - #"^(2040)" #"^(2070)" - #"^(218F)"
		#"^(2C00)" - #"^(2FEF)" #"^(3001)" - #"^(D7FF)"
		#"^(F900)" - #"^(FDCF)" #"^(FDF0)" - #"^(FFFD)"
	]
	word: [w1 any w+]
(sorry if that looks messy)
Both w1 and w+ appear to be very large values. Would it be smart 
to perhaps do:

	[[aw1 | w1] any [aw+ | w+]]

where 'aw1 and 'aw+ are limited to ASCII values?
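A sketch of that ASCII-first variant (AW1 and AW+ here are just assumed to be the ASCII-only slices of the W1 and W+ ranges above):

	aw1: charset [#"A" - #"Z" #"_" #"a" - #"z"]
	aw+: charset [#"-" #"." #"0" - #"9" #"A" - #"Z" #"_" #"a" - #"z"]
	word: [[aw1 | w1] any [aw+ | w+]]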
Steeve
22-Oct-2009
[4548x4]
Use R3 (and its optimized complemented bitsets)
Anyway, a bitset with a length of 2 ** 16 is not so huge in memory 
(only 16kb)
64 Kb, sorry
So W1 + W+ = 128 Kb

Is this a problem?
Chris
22-Oct-2009
[4552]
That's what I'm asking.  Complemented bitsets wouldn't make a difference 
here though as the excluded range is of similar scope, right?
Steeve
22-Oct-2009
[4553x2]
It seems so.
If the size is a problem, you can build a function to test each range, 
but it will be slow.
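A rough sketch of that alternative (the name and the exact checks are assumptions; only the first few W1 ranges are spelled out), a predicate a rule could consult -- for example via R3's new IF keyword, where available -- instead of a large bitset:

	w1-char?: func [c [char!]] [
		any [
			all [c >= #"A" c <= #"Z"]
			c = #"_"
			all [c >= #"a" c <= #"z"]
			all [c >= #"^(C0)" c <= #"^(D6)"]
			; ... remaining W1 ranges from above
		]
	]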