
world-name: r3wp

Group: Parse ... Discussion of PARSE dialect [web-public]
Ladislav:
15-Nov-2011
I need CHANGE too, and the full version where the value you're changing to is given as an expression in a paren

 - this changing during parsing is known to be O(n), i.e. highly inefficient. For any serious code it is a disaster
Ladislav:
15-Nov-2011
Regarding CASE and backtracking: it is not a problem when the effect 
of the keyword is limited to the nearest enclosing block.
BrianH:
15-Nov-2011
Backtracking often happens within blocks too, but yes, that does 
limit the scope of the problems caused (it doesn't eliminate the 
problem, it just limits its scope). Mode operations also don't interact 
well with flow control operations like OPT, NOT and AND. What would 
NOT CASE mean if CASE has effect on subsequent code without being 
tied to it? As a comparison, NOT CASE "a" has a much clearer meaning.
Gregg:
15-Nov-2011
I like the idea of a CASE option. There haven't been many times I've 
needed it, but a few. Other things are higher on my priority list 
for R3, but I wouldn't complain if this made its way in there.
Endo:
1-Dec-2011
I want to keep the digits and remove all the rest:

t: "abc56xyz"
parse/all t [some [digit (prin "d") | x: (prin "." remove x)]]
print head t

This does the work but never finishes. If I add a "skip" to the second part, the result is "b56y".
How do I fix it?
Geomol:
1-Dec-2011
Alternative not using parse:

>> t: "abc56xyz"
== "abc56xyz"
>> non-digit: ""
== ""
>> for c #"a" #"z" 1 [append non-digit c]
== "abcdefghijklmnopqrstuvwxyz"
>> for c #"A" #"Z" 1 [append non-digit c]
== {abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ}
>> trim/with t non-digit
== "56"
Endo:
1-Dec-2011
A bit clearer:

t: "abc56xyz"
parse/all t [some [x: non-digit (x: back remove x) :x | skip]]
head t
Gabriele:
1-Dec-2011
note that copying the whole thing is probably faster than removing 
multiple times. also, doing several chars at once instead of one 
at a time is faster.
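A sketch of the copy-based approach Gabriele describes, collecting several digits at once instead of removing one at a time (assuming the usual DIGIT charset, which the snippets above leave undefined):

digit: charset "0123456789"
t: "abc56xyz"
out: make string! length? t
parse/all t [any [copy d some digit (append out d) | skip]]
print out  ; "56"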
Endo:
1-Dec-2011
It depends on the input, but if it's a long text with many chars to insert/remove, your way will be faster. Thanks!
Dockimbel:
1-Dec-2011
Endo: in your first attempt, your second rule in SOME block is not 
making the input advance when the end of the string is reached because 
(remove "") == "", so it enters an infinite loop. A simple fix could 
be:


t: "abc56xyz" parse/all t [any [digit (prin "d") | x: skip (prin 
"." remove x) :x]]


(remember to correctly reset the input cursor when modifying the 
parsed series) 


As others have suggested, they are more optimal ways to achieve this 
trimming.
Ashley:
1-Dec-2011
Anyone written anything to parse csv into an import-friendly stream?

Something like:

a,      b ,"c","d1
d2",a ""quote"",",",

a|b|c|d1^/d2|a "quote"|,|


(I'm trying to load CSV files dumped from Excel into SQLite and SQL 
Server ... these changes will be in the next version of my SQLite 
driver)
BrianH:
2-Dec-2011
I use a TO-CSV function that does type-specific value formatting - the dates in particular, to be Excel-compatible. I was about to make a LOAD-CSV function but haven't needed it yet.
BrianH:
2-Dec-2011
Here's the R2 version of TO-CSV and TO-ISO-DATE (Excel compatible):

to-iso-date: funct/with [
	"Convert a date to ISO format (Excel-compatible subset)"
	date [date!] /utc "Convert zoned time to UTC time"
] [
	if utc [date: date + date/zone date/zone: none] ; Excel doesn't support the Z suffix
	either date/time [ajoin [
		p0 date/year 4 "-" p0 date/month 2 "-" p0 date/day 2 " "  ; or T
		p0 date/hour 2 ":" p0 date/minute 2 ":" p0 date/second 2  ; or offsets
	]] [ajoin [
		p0 date/year 4 "-" p0 date/month 2 "-" p0 date/day 2
	]]
] [
	p0: func [what len] [ ; Function to left-pad a value with 0
		head insert/dup what: form :what "0" len - length? what
	]
]

to-csv: funct/with [
	"Convert a block of values to a CSV-formatted line in a string."
	[catch]
	data [block!] "Block of values"
] [
	output: make block! 2 * length? data
	unless empty? data [append output format-field first+ data]
	foreach x data [append append output "," format-field get/any 'x]
	to-string output
] [
	format-field: func [x [any-type!]] [case [
		none? get/any 'x [""]
		any-string? get/any 'x [ajoin [{"} replace/all copy x {"} {""} {"}]]
		get/any 'x = #"^"" [{""""}]
		char? get/any 'x [ajoin [{"} x {"}]]
		scalar? get/any 'x [form x]
		date? get/any 'x [to-iso-date x]
		any [any-word? get/any 'x any-path? get/any 'x binary? get/any 'x] [
			ajoin [{"} replace/all to-string :x {"} {""} {"}]
		]
		'else [throw-error 'script 'invalid-arg get/any 'x]
	]]
]


There is likely a faster way to do these. I have R3 variants of these 
too.
BrianH:
2-Dec-2011
Here's a version that works in R3, tested against your example code:
>> a: deline read clipboard://
== {a,      b ,"c","d1
d2",a ""quote"",",",}

>> use [x] [collect [parse/all a [some [[{"} copy x [to {"} any [{""} to {"}]] {"} (keep replace/all x {""} {"}) | copy x [to "," | to end] (keep x)] ["," | end]]]]]
== ["a" "      b " "c" "d1^/d2" {a ""quote""} "," ""]


But it didn't work in R2, leading to an endless loop. So here's the version refactored for R2 that also works in R3:

>> use [value x] [collect [value: [{"} copy x [to {"} any [{""} to {"}]] {"} (keep replace/all any [x ""] {""} {"}) | copy x [to "," | to end] (keep any [x ""])] parse/all a [value any ["," value]]]]
== ["a" "      b " "c" "d1^/d2" {a ""quote""} "," ""]


Note that if you get the b back as "b" (trimmed) then it isn't CSV compatible, nor is it if you unescape the {""} in values that aren't themselves escaped by quotes. However, you aren't supposed to allow newlines in values that aren't surrounded by quotes, so you can't do READ/lines and parse line by line; you have to parse the whole file.
BrianH:
2-Dec-2011
That operation would be a great thing to add to the R3 Parse Proposals 
:)
BrianH:
2-Dec-2011
I copied Ashley's example data into a file and checked against several 
commercial CSV loaders, including Excel and Access. Same results 
as the parsers above.
Endo:
2-Dec-2011
BrianH: I tested the csv parsing (R2 version); there is just a little problem with a space between the comma and the quote:

parse-csv: func [a][ use [value x] [collect [value: [{"} copy x [to {"} any [{""} to {"}]] {"} (keep replace/all any [x ""] {""} {"}) | copy x [to "," | to end] (keep any [x ""])] parse/all a [value any ["," value]]]]]

parse-csv {"a,b", "c,d"}  ; there is a space after the comma
== ["a,b" { "c} {d"}]     ; wrong result

I know it is a problem in the CSV input, but I think you can easily fix it, and then the parse-csv function will be perfect.
Ashley:
2-Dec-2011
Also this case:

	{"a,b" ,"c,d"} ; space *before* comma

This case

	"a, b"


can be dealt with by replacing "keep any" with "keep trim any" ... 
but Brian's func handles 95% of the real-life test cases I've thrown 
at it so far, so a big thanks from me.
Endo:
2-Dec-2011
These are also a bit strange:
>> parse-csv {"a", "b"}
== ["a" { "b"}]
>> parse-csv { "a" ,"b"}
== [{ "a" } "b"]
>> parse-csv {"a" ,"b"}
== ["a"]
BrianH:
2-Dec-2011
If there is a space after the comma and before the ", the " is part 
of the value. The " character is only used as a delimiter if it is 
directly next to the comma.
BrianH:
2-Dec-2011
My func handles 100% of the CSV standard - http://tools.ietf.org/html/rfc4180
- at least for a single line. To really parse CSV you need a full-file 
parser, because you have to consider that newlines in values surrounded 
by quotes are counted as part of the value, but if the value is not 
surrounded completely by quotes (including leading and trailing spaces) 
then newlines are treated as record separators.
BrianH:
2-Dec-2011
CSV is not supposed to be forgiving of spaces around commas. Even 
the "" escaping to get a " character in the middle of a " surrounded 
value is supposed to be turned off when the comma, beginning of line, 
or end of line have spaces next to them.
BrianH:
2-Dec-2011
For the purposes of discussion I'll put the CSV data inside {}, so 
you can see the ends, and the results in a block of line blocks.

This: { "a" }
should result in this: [[{ "a" }]]

This: { "a
b" }
should result in this: [[{ "a}] [{b" }]]

This: {"a
b"}
should result in this: [[{a
b}]]

This: {"a ""b"" c"}
should result in this: [[{a "b" c}]]

This: {a ""b"" c}
should result in this: [[{a ""b"" c}]]

This: {"a", "b"}
should result in this: [["a" { "b"}]]
Gregg:
2-Dec-2011
load-csv: func [
    "Load and parse a delimited text file."
    source [file! string!]
    /with
        delimiter
    /local lines
][
    if not with [delimiter: ","]

    lines: either file? source [read/lines source] [parse/all source "^/"]
    remove-each line lines [empty? line]
    if empty? lines [return copy []]
    head forall lines [
        change/only lines parse/all first lines delimiter
    ]
]
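A minimal round trip with this function, for reference (the input strings here are made up for illustration):

>> load-csv "a,b^/c,d"
== [["a" "b"] ["c" "d"]]
>> load-csv/with "a|b^/c|d" "|"
== [["a" "b"] ["c" "d"]]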
Gregg:
2-Dec-2011
I did head down the path of trying to handle all the things REBOL 
does wrong with quoted fields and such, but I have always found a 
way to avoid dealing with it.
Ashley:
2-Dec-2011
load-csv fails to deal with these 3 simple (and for me, common) cases:

1,"a
b"
2,"a""b"
3,

>> load-csv %test.csv
== [["1" "a"] [{b"}] ["2" "a" "b"] ["3"]]

I've reverted to an in situ brute force approach:

c: make function! [data /local s] [
	all [find data "|" exit]
	s: false
	repeat i length? trim data [
		switch pick data i [
			#"^""	[s: complement s]
			#","	[all [not s poke data i #"|"]]
			#"^/"	[all [s poke data i #" "]]
		]
	]
	remove-each char data [char = #"^""]
	all [#"|" = last data insert tail data #"|"]	; only required if we're going to parse the data
	parse/all data "|^/"
]

which has 4 minor limitations:

1) the data can't contain the delimiter you're going to use ("|" in my case)
2) it replaces quoted returns with another character (" " in my code)
3) it removes all quote (") characters (to allow SQLite .import and parse/all to function correctly)
4) individual values are not trimmed (e.g. "a ,b" -> ["a " "b"])


If you can live with these limitations then the big benefit is that 
you can omit the last two lines and have a string that is import 
friendly for SQLite (or SQL Server) ... this is especially important 
when dealing with large (100MB+) CSV files! ;)
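To make the trade-offs concrete, here is what the function does to the first failing case above (a quoted embedded newline); the result follows from limitations 2) and 3):

>> c {1,"a^/b"}
== ["1" "a b"]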
BrianH:
2-Dec-2011
I'm working on a fully standards-compliant full-file LOAD-CSV - actually 
two, one for R2 and one for R3. Need them both for work. For now 
I'm reading the entire file into memory before parsing it, but I 
hope to eventually make the reading incremental so there's more room 
in memory for the results.
Ashley:
3-Dec-2011
Actually, 4) above is easily solved by adding an additional switch 
case:

	#" "	[all [not s poke data i #"^""]]

This will ensure "a , b" -> ["a" "b"]
BrianH:
3-Dec-2011
But it doesn't assure that "a , b" -> ["a " " b"]. It doesn't work 
if it trims the values.
Ashley:
3-Dec-2011
it doesn't work if it trims the values.

 - that may not be the standard, but when you come across values like:

	1, 2, 3


the intent is quite clear (they're numbers) ... if we retained the 
leading spaces then we'd be treating these values (erroneously) as 
strings. There's a lot of malformed CSV out there! ;)
BrianH:
3-Dec-2011
I'm putting LOAD-CSV in the %rebol.r of my dbtools, treating it like 
a mezzanine. That's why I need R2 and R3 versions, because they use 
the same %rebol.r with mostly the same functions. My version is a 
little more forgiving than the RFC above, allowing quotes to appear 
in non-quoted values. I'm making sure that it is exactly as forgiving 
on load as Excel, Access and SQL Server, resulting in exactly the 
same data, spaces and all, because my REBOL scripts at work are drop-in 
replacements for office automation processes. If anything, I don't 
want the loader to do value conversion because those other tools 
have been a bit too presumptuous about that, converting things to 
numbers that weren't meant to be. It's better to do the conversion 
explicitly, based on what you know is supposed to go in that column.
Kaj:
3-Dec-2011
Sounds like a job for a dialect that specifies what is supposed to 
be in the columns
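Kaj's idea could look something like this: a minimal sketch of a column-spec pass over LOAD-CSV output (the function name and the dialect are hypothetical, not an existing tool; no error handling for short rows):

convert-columns: func [
	"Convert loaded CSV rows per a column spec (hypothetical sketch)"
	rows [block!] "Block of row blocks of strings, as returned by LOAD-CSV"
	spec [block!] "One type word per column: string! integer! decimal! date!"
] [
	foreach row rows [
		repeat i length? spec [
			switch pick spec i [
				integer! [poke row i to-integer pick row i]
				decimal! [poke row i to-decimal pick row i]
				date!    [poke row i to-date pick row i]
				string!  [] ; leave as-is
			]
		]
	]
	rows
]

; e.g. convert-columns load-csv %data.csv [string! integer! date!]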
Gregg:
3-Dec-2011
As far as standards compliance, I didn't know there was a single 
standard. ;-)
BrianH:
3-Dec-2011
There's an ad-hoc de facto standard, but it's pretty widely supported. I admit, the binary support came as a bit of a surprise :)
BrianH:
3-Dec-2011
Here's the R2 version, though I haven't promoted the emitter to an 
option yet:

load-csv: funct [
	"Load and parse CSV-style delimited data. Returns a block of blocks."
	[catch]
	source [file! url! string! binary!]
	/binary "Don't convert the data to string (if it isn't already)"
	/with "Use another delimiter than comma"
	delimiter [char! string! binary!]
	/into "Insert into a given block, rather than make a new one"
	output [block!] "Block returned at position after the insert"
] [
	; Read the source if necessary
	if any [file? source url? source] [throw-on-error [
		source: either binary [read/binary source] [read source]
	]]
	unless binary [source: as-string source] ; No line conversion
	; Use either a string or binary value emitter
	emit: either binary? source [:as-binary] [:as-string]
	; Set up the delimiter
	unless with [delimiter: #","]
	valchar: remove/part charset [#"^(00)" - #"^(FF)"] join crlf delimiter
	; Prep output and local vars
	unless into [output: make block! 1]
	line: [] val: make string! 0
	; Parse rules
	value: [
		; Value surrounded in quotes
		{"} (clear val) x: to {"} y: (insert/part tail val x y)
		any [{"} x: {"} to {"} y: (insert/part tail val x y)]
		{"} (insert tail line emit copy val) |
		; Raw value
		x: any valchar y: (insert tail line emit copy/part x y)
	]
	; as-string because R2 doesn't parse binary that well
	parse/all as-string source [any [
		end break |
		(line: make block! length? line)
		value any ["," value] [crlf | cr | lf | end]
		(output: insert/only output line)
	]]
	also either into [output] [head output]
		(source: output: line: val: x: y: none) ; Free the locals
]


All my tests pass, though they're not comprehensive; maybe you'll 
come up with more. Should I add support for making the row delimiter 
an option too?
BrianH:
3-Dec-2011
>> load-csv {^M^/" a""", a""^Ma^/^/}
== [[""] [{ a"} { a""}] ["a"] [""]]
>> load-csv/binary to-binary {^M^/" a""", a""^Ma^/^/}
== [[#{}] [#{206122} #{20612222}] [#{61}] [#{}]]
BrianH:
4-Dec-2011
The one above misses one of the Excel-like bad data handling patterns. Plus, I've added a few features, like multi-load, more option error checking, and R3 versions. I'll post them on REBOL.org today.
BrianH:
5-Dec-2011
Making the end-of-line delimiter an option turned out to be really 
tricky, too tricky to be worth it. The code and time overhead from 
just processing the option itself was pretty significant. It would 
be a better idea to make that kind of thing into a separate function 
which requires the delimiters to be specified, or a generator that 
takes a set of delimiters and generates a function to handle that 
specific set.
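That generator idea might look like this: a naive sketch that fixes the delimiters into a specialized splitter (names are hypothetical, there is no quote handling, and it uses MAP-EACH, so R2 2.7.6+ or R3 is assumed):

make-splitter: func [
	"Build a splitter fixed to the given delimiters (sketch only; ignores quoting)"
	field-delim [char!] record-delim [char!]
] [
	func [data [string!]] compose/deep [
		map-each line parse/all data (to-string record-delim) [
			parse/all line (to-string field-delim)
		]
	]
]

split-tsv: make-splitter #"^-" #"^/"
split-tsv "a^-b^/c^-d"  ; == [["a" "b"] ["c" "d"]]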
Henrik:
5-Dec-2011
Well, now, Brian, this looks very convenient. :-) I happen to need a better CSV parser than the one I have here, but it needs to not convert cell values away from string, and I also need to parse partially, i.e. only the first N lines. Is this possible with this one?
BrianH:
5-Dec-2011
It doesn't do conversion from string (or even from binary with LOAD-CSV/binary). 
This doesn't have a /part option but that is a good idea, especially 
since you can't just READ/lines a CSV file because it treats newlines 
differently depending on whether the value is in quotes or not. If 
you want to load incrementally (and can break up the lines yourself, 
for now) then LOAD-CSV supports the standard /into option.
Henrik:
5-Dec-2011
since you can't just READ/lines a CSV file
 - yes, mine does that, and that's no good.
BrianH:
5-Dec-2011
Yes, that is a possibility, but it's not there yet. Resuming would be a problem because you'd have to either save a continuation position or reparse. Maybe something like LOAD/next would work here, preferably like the way R3's LOAD/next was before it was removed in favor of TRANSCODE/next. Making the /into option work with /next and /part would be interesting.
Henrik:
5-Dec-2011
I don't really need anything but having the ability to parse the 
first 100 lines of a file and doing that many times, so I don't care 
so much about continuation. This is for real-time previews of large 
CSV files (> 10000 lines).
BrianH:
5-Dec-2011
Which do you prefer as a /next style?
	set [output data] load-csv/into data output
or
	output: load-csv/into/next data output 'data
BrianH:
5-Dec-2011
The latter makes chaining of the data to other functions easier, 
but requires a variable to hold the continuation; however, you usually 
use a variable for that anyway. The former makes it easier to chain 
both values (and looks nicer to R2 fans), but the only function you 
normally chain both values to is SET, so that's of limited value.
BrianH:
6-Dec-2011
http://www.rebol.org/view-script.r?script=csv-tools.r updated, with the new LOAD-CSV /part option.

The LOAD-CSV /part option takes two parameters:
- count: The maximum number of decoded lines you want returned.

- after: A word that will be set to the position of the data after 
the decoded portion, or none.


If you are loading from a file or url then the entire data is read, and after is set to a position in the read data. If you are converting from binary then in R2 after is set to an offset of an as-string alias of the binary, and in R3 after is set to an offset of the original binary. R3 does binary conversion on a per-value basis to avoid having to allocate a huge chunk of memory for a temporary, and R2 just does string aliasing for the same reason. Be careful if you are passing the value assigned to after to anything other than LOAD-CSV (which can handle it either way).
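Based on that description, usage would presumably look something like this (the file name is hypothetical):

rows: load-csv/part %big-file.csv 100 'rest
; rows holds up to 100 decoded lines;
; rest is the position after the decoded portion, or none when done
more: load-csv/part rest 100 'rest  ; continue from the saved position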
BrianH:
6-Dec-2011
I was a little concerned about making /part take two parameters, 
since it doesn't anywhere else, but the only time you need that continuation 
value is when you do /part, and you almost always need it then. Oh 
well, I hope it isn't too confusing :)
BrianH:
6-Dec-2011
This pass-by-word convention is a little too C-like for my tastes. 
If only we had multivalue return without overhead, like Lua and Go.
ChristianE:
7-Dec-2011
Do you consider LOAD-CSV { " a " , " b " , " c " } yielding [[{ " a " } { " b " } { " c " }]] to be on spec? It says that spaces are part of a field's value, yet it states that fields may be enclosed in double quotes. I'd rather have expected [[" a " " b " " c "]] as a result. The way it is, LOAD-CSV in such cases parses unescaped double quotes as part of the value; IMHO that's not conforming with the spec.
BrianH:
7-Dec-2011
I considered making a /strict option to make it trigger errors in that case, but then reread the RFC and checked the behavior again, and realized that no one took the spec that strictly. Most tools either behave exactly the same as my LOAD-CSV (because that's how Excel behaves), or completely fail when there are any quotes in the file, like PARSE data "," and PARSE/all data ",".
Sunanda:
8-Dec-2011
Debugging some live code here .... I wasn't expecting 'parse to drop the last space in the second case here:

    parse/all " a" " "
    == ["" "a"]
    parse/all " a " " "
    == ["" "a"]

So after the parse, it seems that " a" = " a "

Any thoughts on a quick workaround? Thanks!
PeterWood:
8-Dec-2011
Very crudely, adding an additional space if the last character is a space:

>> s: " a "
== " a "
>> if #" " = last s [append s " "]
== " a  "
>> parse/all s " "
== ["" "a" ""]
Henrik:
18-Dec-2011
BrianH, testing csv-tools.r now.

Is this a bug?:

>> to-iso-date 18-Dec-2011/14:57:11
** Script Error: Invalid path value: hour
** Where: ajoin
** Near: p0 date/hour 2 ":" p0
>> system/version
== 2.7.8.3.1
BrianH:
18-Dec-2011
Yeah, blocks for cells are so far outside the data model of everything 
else that uses CSV files that TO-CSV was written to assume that you 
forgot to put an explicit translation to a string or binary in there 
(MOLD, FORM, TO-BINARY), or more likely that the block got in there 
by accident. Same goes for functions and a few other types.
BrianH:
18-Dec-2011
As for that TO-ISO-DATE behavior, yes, it's a bug. Surprised I didn't 
know that you can't use /hour, /minute and /second on date! values 
with times in them in R2. It can be fixed by changing the date/hour 
to date/time/hour, etc. I'll update the script on REBOL.org.
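In other words, with the fix the time path works (a quick sketch against the example above):

>> d: 18-Dec-2011/14:57:11
>> d/time/hour
== 14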
GrahamC:
18-Dec-2011
Dunno if it's faster, but to left-pad days and months I add 100 to the value and then do a NEXT followed by a FORM, i.e. regarding your P0 function
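Graham's trick in code (a sketch; equivalent to P0 only for two-digit padding):

>> form next form 100 + 7
== "07"
>> form next form 100 + 12
== "12"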
BrianH:
20-Dec-2011
Added a TO-CSV /with delimiter option, in case commas aren't your 
thing. It only specifies the field delimiter, not the record delimiter, 
since TO-CSV only makes CSV lines, not whole files.
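For example, with the updated script loaded, a tab-delimited line would presumably look like this (illustrative, untested):

>> to-csv/with ["a" 1 2-Jan-2011] #"^-"
== {"a"^-1^-2011-01-02}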
Endo:
20-Dec-2011
I'm using it to prepare data to bulk insert into a SQL Server table using the BCP command line tool.

I need to make some changes, like a /no-quote refinement to not quote string values, because there is no option in BCP to say that my data has quoted string values.
BrianH:
20-Dec-2011
Be careful, if you don't quote string values then the character set 
of your values can't include cr, lf or your delimiter. It requires 
so many changes that it would be more efficient to add new formatter 
functions to the associated FUNCT/with object, then duplicate the 
code in TO-CSV that calls the formatter. Like this:

to-csv: funct/with [
	"Convert a block of values to a CSV-formatted line in a string."
	data [block!] "Block of values"
	/with "Specify field delimiter (preferably char, or length of 1)"
	delimiter [char! string! binary!] {Default ","}
	; Empty delimiter, " or CR or LF may lead to corrupt data
	/no-quote "Don't quote values (limits the characters supported)"
] [
	output: make block! 2 * length? data
	delimiter: either with [to-string delimiter] [","]
	either no-quote [
		unless empty? data [append output format-field-nq first+ data]
		foreach x data [append append output delimiter format-field-nq :x]
	] [
		unless empty? data [append output format-field first+ data]
		foreach x data [append append output delimiter format-field :x]
	]
	to-string output
] [
	format-field: func [x [any-type!] /local qr] [
		; Parse rule to put double-quotes around a string, escaping any inside
		qr: [return [insert {"} any [change {"} {""} | skip] insert {"}]]
		case [
			none? :x [""]
			any-string? :x [parse copy x qr]
			:x = #"^(22)" [{""""}]
			char? :x [ajoin [{"} x {"}]]
			money? :x [find/tail form x "$"]
			scalar? :x [form x]
			date? :x [to-iso-date x]
			any [any-word? :x binary? :x any-path? :x] [parse to-string :x qr]
			'else [cause-error 'script 'expect-set reduce [
				[any-string! any-word! any-path! binary! scalar! date!] type? :x
			]]
		]
	]
	format-field-nq: func [x [any-type!]] [
		case [
			none? :x [""]
			any-string? :x [x]
			money? :x [find/tail form x "$"]
			scalar? :x [form x]
			date? :x [to-iso-date x]
			any [any-word? :x binary? :x any-path? :x] [to-string :x]
			'else [cause-error 'script 'expect-set reduce [
				[any-string! any-word! any-path! binary! scalar! date!] type? :x
			]]
		]
	]
]


If you want to add error checking to make sure the data won't be 
corrupted, you'll have to pass in the delimiter to format-field-nq 
and trigger an error if it, cr or lf are found in the field data.
BrianH:
20-Dec-2011
Nope, that's a bug in the R2 version only. Change this:
			:x = #"^(22)" [{""""}]
to this:
			:x == #"^(22)" [{""""}]

Another incompatibility between R2 and R3 that I forgot :(
I'll update the script on REBOL.org.
BrianH:
20-Dec-2011
Note that that was a first-round mockup of the R3 version, Endo. 
If you want to make an R2 version, download the latest script and 
edit it similarly.
BrianH:
20-Dec-2011
Have you looked into the native type formatting of bcp? It might 
be easier to make a more precise data file that way.
Endo:
20-Dec-2011
It uses a format file; it is very strict, but there is no way to set a quote char for fields.
BrianH:
20-Dec-2011
I figure it might be worth it (for me at some point) to do some test 
exports in native format in order to reverse-engineer the format, 
then write some code to generate that format ourselves. I have to 
do a lot of work with SQL Server, so it seems inevitable that such 
a tool will be useful at some point, or at least the knowledge gained 
in the process of writing it.
Endo:
20-Dec-2011
I'm working with SQL Server for a long time, if anything I can help 
or test for you, feel free to ask if you need.
Group: Core ... Discuss core issues [web-public]
Geocaching:
17-Mar-2011
the statement a: "" is useless... You could obtain the same behaviour 
without repeating a: ""
Geocaching:
17-Mar-2011
???
>>  my-code-a: [a: [] append a 'x]
== [a: [] append a 'x]
>> do my-code-a
== [x]
>> do my-code-a
== [x x]
>> a: []
== []
>> a
== []
>> head a
== []
>> do my-code-a
== [x x x]
>> a
== [x x x]


what is the logic behind this? How could a be empty after a: [] and be filled with three 'x after just one call to do my-code-a?
Ladislav:
17-Mar-2011
:-D you need to consult the MY-CODE-A block contents
Rebolek:
17-Mar-2011
Because it's a different A
Geocaching:
17-Mar-2011
Rebolek: both calls to 'a are in the same context (root context). How could we have two different 'a in the same context?
Ladislav:
17-Mar-2011
it's like

a: []
a: [1 2 3]


how come that, after no insert at all, I get three elements in an (originally empty) block? ;-)
Geocaching:
17-Mar-2011
Sorry... I do not understand...
 
>> a
== []

>> a
== [x x x]
Ladislav:
17-Mar-2011
>> my-code-a: [a: [] append a 'x]
== [a: [] append a 'x]
>> do my-code-a
== [x]
>> do my-code-a
== [x x]
>> a: []
== []
>> my-code-a
== [a: [x x] append a 'x]
>> do my-code-a
== [x x x]
>> a
== [x x x]
Rebolek:
17-Mar-2011
The first A is defined inside the MY-CODE-A block. And because you do not use COPY [], the same block is reused, not a fresh one, every time you DO the MY-CODE-A block.
Geocaching:
17-Mar-2011
OK... it is a question of pointer assignment in some way...
Ladislav:
17-Mar-2011
why pointer? by doing my-code: [a: [x x]] I just assign the block that is the second element in MY-CODE to A
Ladislav:
17-Mar-2011
same? a second my-code
== true
Geocaching:
17-Mar-2011
It looks to me like every time you call my-code-a, you assign the block defined in my-code-a to the variable a, which is globally accessible. When you write a: [] outside my-code-a, you assign another block to a... a is a pointer and you switch the address it points to.

>> my-code-a: [a: [] append a 'x]
== [a: [] append a 'x]
>> do my-code-a
== [x]
>> a
== [x]
>> a: copy []
== []
>> append a 'y
== [y]
>> a
== [y]
>> do my-code-a
== [x x]
>> a
== [x x]
Ladislav:
17-Mar-2011
To explain it even further, here is yet another example:

>> my-code-c: [a: []]
== [a: []]
>> do my-code-c
== []
>> append a 'x
== [x]
>> my-code-c
== [a: [x]]
Ladislav:
17-Mar-2011
Example prolonged:

>> my-code-c: [a: []]
== [a: []]
>> do my-code-c
== []
>> append a 'x
== [x]
>> my-code-c
== [a: [x]]
>> a: []
== []
>> my-code-c
== [a: [x]]
Ladislav:
17-Mar-2011
An even longer one:

>> my-code-c: [a: []]
== [a: []]
>> my-code-d: [a: []]
== [a: []]
>> do my-code-c
== []
>> append a 'a
== [a]
>> my-code-c
== [a: [a]]
>> my-code-d
== [a: []]
>> do my-code-d
== []
>> append a 'c
== [c]
>> my-code-c
== [a: [a]]
>> my-code-d
== [a: [c]]
Gregg:
17-Mar-2011
you assign the block defined in my-code-a to the variable a


In addition to not thinking in terms of pointers, the REBOLish view 
of the above would be "You set the word a to refer to the block defined 
in my-code-a"; words refer to values, values are not assigned to 
words. An important distinction.
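A small illustration of that distinction: two words can refer to the same block value, so modifying it through one is visible through the other:

>> a: b: [1 2]
== [1 2]
>> append a 3
== [1 2 3]
>> b
== [1 2 3]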
Ladislav:
18-Mar-2011
I would like to disagree in this case. Not that it is important, 
but, in my opinion, the expression

    a: []


can be considered "assignment", whatever that means. In that sense, 
the value is "assigned" to a variable.
Ladislav:
18-Mar-2011
What is more curious in the "You set the word a to refer to the block 
defined in my-code-a" is the word "defined". The truth is, that MY-CODE-A 
is a block, that is created by the LOAD function at (roughly) the 
same time its contents, including the above mentioned subblock, come 
into existence.
Ladislav:
18-Mar-2011
What I wanted to say was that while the source string defined the subblock, as well as the MY-CODE-A block, the MY-CODE-A block only happens to actually contain (refer to) it.
Dockimbel:
19-Mar-2011
Just spent the last hour searching for the cause of an issue in my 
code. It appears to be a native LOAD bug, as shown by the code below:

>> length? probe load "[<] b [>]"
[<] b [>]
== 1
Dockimbel:
19-Mar-2011
Seems that there's a shorter form that has the same issue: "[<][>]"
Dockimbel:
19-Mar-2011
Just added a ticket in RAMBO, now need to find a workaround.
BrianH:
19-Mar-2011
Yeah, I remember that one from a couple years ago. The arrow words 
are special cased, and in R2 the tag type is recognized with higher 
priority unless there is a space afterwards.
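If so, the whole span between the arrows would load as a single tag! value, something like this (untested sketch):

>> type? first load "[<] b [>]"
== tag!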
Dockimbel:
19-Mar-2011
I've tested both with INSERT and WRITE-IO, same result. But I've 
used CALL/OUTPUT on the test CGI script to simulate a call from a 
webserver.
Dockimbel:
19-Mar-2011
Then the error should show up less frequently; it would need #{0D0A} in the data to produce a corruption.
Dockimbel:
19-Mar-2011
I was wondering if it could be related to a CALL issue, but my own 
CALL.r implementation in Cheyenne shows the same result. So I guess 
it's definitely an internal CGI handling issue.
PeterWood:
19-Mar-2011
I came across the issue as I was trying to run REBOL/Services under Cheyenne in CGI mode. I have found that 0x0D bytes get changed to 0x0A; it doesn't matter what they are preceded or followed by.

I also found that 0x0D0D gets converted to a single 0x0A.
PeterWood:
20-Mar-2011
I suspect that the problem is more likely to be with 'call than with REBOL in CGI mode, as REBOL/Services runs as a CGI under Xitami on Windows.

The problem does not occur on OS X.
PeterWood:
20-Mar-2011
I have run a test which seems to show that the problem lies with 
'call.
PeterWood:
20-Mar-2011
First I ran a small command line pgm:
PeterWood:
20-Mar-2011
This is the console output from the command line pgm:

C:\REBOLServicesTest>cr
)haracter 13 is enclosed in the parentheses (


I then checked that the command line pgm could be successfully called 
with the following two lines of Ruby:

	puts %x{cr}
	print %x{cr}.dump

Which gave the following output:
C:\REBOLServicesTest>ruby call_test.rb
)haracter 13 is enclosed in the parentheses (
Character 13 is enclosed in the parentheses (\r)

I then called the command line pgm from a REBOL Console session:

>> call/console "cr"
Character 13 is enclosed in the parentheses (
)== 0
>> print to-binary {Character 13 is enclosed in the parentheses (
{    )}
#{
43686172616374657220313320697320656E636C6F73656420696E2074686520
706172656E74686573657320280A29
}
>> buffer: make string! 256
== ""
>> call/output "cr" buffer
== 0
>> probe buffer
Character 13 is enclosed in the parentheses (^/)
== "Character 13 is enclosed in the parentheses (^/)"
>> print to-binary buffer
#{
43686172616374657220313320697320656E636C6F73656420696E2074686520
706172656E74686573657320280A29
}


As you can see both call/console and call/output turned the 0x0D 
into a 0x0A.
Dockimbel:
21-Mar-2011
I concur, it's a CALL issue and not a --cgi one. I did more tests 
with my own CALL/OUTPUT implementation and it doesn't show any newline 
alteration in the binary CGI output.
Henrik:
24-Mar-2011
hmm.. never mind. seems to be a memory problem.
Oldes:
25-Mar-2011
I guess this is a bug in R2's lexer:
>> 2#
== ##
>> 4#foo
== ##foo
>> 456457#foo
== #56457#foo