AltME groups: search


world-name: r4wp

Group: Rebol School ... REBOL School [web-public]
Doesn't sound like you need RebDB. You could just do the operation 
on an Excel export such as CSV format
Try using Gregg's perfect REBOL Excel Control Dialect: http://www.robertmuench.ch/development/projects/excel/dialect_documentation/

Also look at Brian's csv-tools on rebol.org:

Arnold, you can also take a look at an .xml file that Excel produces 
and see how that is configured. I've had better success with xml 
files than csv (though I use those as well) since you can add all 
kinds of formatting with XML.
It is what I like about this community :)

I knew that when I wrote an RLE function, BrianH would come up with a much 
better version. Doc and others joined as well and now we have a very 
good function. Just like the CSV tools. Thanks.

world-name: r3wp

Group: All ... except covered in other channels [web-public]
Of course, it's nowhere near as difficult if you can export a CSV 
file from Excel first.
Actually, the refinement is /CSV -- but your wish has, I think, been 
taken into account.

You can grow a CSV file one row per run of application-sizer.r to 
gather lots of metrics.
All we now need to do is work out what those metrics mean!
Sure, he is free to publish anywhere he likes, but it is the first 
place to look, for me and most people (beginners especially).

I have tons of dead links to REBOL pages & scripts moved somewhere 
else. And it is also more difficult to have new versions of those 
scripts not on rebol.org

For example BrianH's csv-tools.r is a great script, useful for most 
people. But I'm sure if it wasn't on the rebol.org many people would 
try to re-invent the wheel. Anyway, just an idea.
Group: Core ... Discuss core issues [web-public]
Anyone got a csv parser that works with embedded quotes?
graham: i think mine dies because of the wrong quoting, not empty 
fields, but i haven't tested it. i would actually be surprised to 
see a csv parser that handles quoting that way, because the only 
way to handle them is to ignore them.
Thanks, Pekr, DocKimbel would be indeed the person to ask because 
of his work on the mySQL drivers. After all, a directory (on which 
LDAP focuses)  is nothing but a specialized database. I'm not sure 
what my needs are because I just started playing with it. Maybe Rebol 
can assist in the conversion of external sources (CSV for example) 
to ldif format and possibly populating the directory automatically.
I am once again amazed with REBOL - I feel like a king when working 
with csv. My parse/all item ";" is a common idiom :-)
I wonder if we could have iteration via two blocks? I have two blocks 
- first with field names, second with values. I could write some 
map function and put it into object, but field names have spaces, 
so I did something like following:

stock: read/lines %stav-zasob.csv

forskip stock 2 [
	fields: parse/all stock/1 ";"
	values: parse/all stock/2 ";"
	i: 1
	foreach name fields [
		print [name ": " values/:i]
		i: i + 1
	]
	ask "Press any key to continue ..."
]

I need to evaluate our phone provider's CSV data for one year. And 
they mix cell phone and normal phone on the invoice, so I need to 
go down and download details. And as it is spread across per-month 
files, I want to do a simple counter in REBOL. With read/lines ... 
and a subsequent foreach line data [ ... parse line "," .....] REBOL 
is really cool for such data manipulations ...
sorry if I propose nonsense, or if the solution already exists, 
but - when using REBOL for data extraction (using parse) and forming 
a block or CSV as a result, I often come to the need of append-only-if-the-item-does-not-exist-already, 
so I use the following idiom:

if not found? find target value [append target value]

What about adding /not refinement (or other name), so that I could 
append only unique values?
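For comparison, the append-if-absent idiom above maps directly onto any list type; here is a minimal Python sketch of the same behaviour (the function name is my own, for illustration):

```python
def append_unique(target, value):
    """Append value to target only if it is not already present
    (the append-only-if-the-item-does-not-exist-already idiom)."""
    if value not in target:
        target.append(value)
    return target
```

The proposed /not refinement would fold the membership test into the append itself, which is exactly what this helper does.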
I give up. I have a routine that creates CSV files but I am stuck 
with how to change the 0D 0A (CRLF) line endings to just 0A (LF) when 
it comes to setting up the actual CSV text file.  Anyone know?
Group: I'm new ... Ask any question, and a helpful person will try to answer. [web-public]
Hi all,  As you may have guessed from my above posts, I'm trying 
to write a script that will convert a formatted report into a table 
or CSV.  I'm new and just playing around to understand the process. 

In any event, I did search rebol.org for CSV and found the CSV.r script 
which seems to do a part of what I would like to do.

But here is my concern.  The Mold-CSV function does not handle all 
the different kinds of strings that can occur.  I'm talking about 
embedded " or {.  I would like a function that can handle all strings 
properly into CSV.

Take this example:
 In-block: [
	["C A1"		"C B1"	]
	[2		3	]
	["2"		"3"	]
	["4a"		"4b"	]
	["5a x"		"5b y"	]
	[""7a,""	""7b,""	]
	[""a8"""	""b8"""	]
 ]

Mold-CSV In-block doesn't handle 7a or a8 lines properly since the 
"" terminates a string.  You could replace the first and last " with 
a brace {} but that does some funny things too.
Getting better, but still no cigar.

Here is my test code for Mold-CSV function in CSV.r script:  ( I 
hope this formats correctly on Altme)

In-block: [						; (want in csv file)		; (want in excel)
	["Col A1"		"Col B1"		]	; Col A1,Col B1			; Col A1	Col B1
	[2			3			]	; 2,3				;      2	     3
	["'3"			"'4"			]	; '3,'4				; '3		'4
	["4a"			"4b"			]	; 4a,4b				; 4a		4b
	["^"5a^"^"^"^""		"^"^"^"5b^"^""		]	; "5a""""","""5b"""		; 5a""		"5b"
	["6a x"			"6b y"			]	; 6a x,6b y			; 6a x		6b y
	["7a, x"		"7b,^"^" y"		]	; "7a, x","7b,"" y"""		; 7a, x		7b," y"
	["^"8a ^"^",x"		"^"8b ^"^"^"^"y^""	]	; "8a "",x","8b """" y"		; 8a ",x	8b "" y
	["^"^"^"9a^"^" x"	"^"9b ^"^"y^"^"^""	]	; """9a"" x","9b ""y"""		; "9a" x	9b "y"
]

 Out-block:	Mold-CSV In-block
 write %Book2.csv Out-block

In the above, I have 3 "views" if you will of what I am after.

The first view is the In-block that I would like Mold-CSV to run on.
The second commented view is what I need Mold-CSV to generate to 
put into the csv file

The third commented view is what Microsoft Excel will generate when 
I open the CSV file.

Mold-CSV works fine for the first 6 lines, then it gives me this 
for lines 7, 8 and 9:

7a, x
,{7b,"" y}		<-- Where did the braces come from for 7b?
{"8a "",x},"8b """"y"	<-- Same question for 8a?

,"9b ""y"""	ok

Any ideas on how to solve this?
In short, Rebol's molding of strings can be in two different formats 
depending on the input string, and depending on where you get your 
input string from, it can be hard to guess which one it is.

I would suggest to make your own Excel/CSV-string-molding function 
to ensure you have double-quotes as expected.

Other people have come to this exact same problem before, by the 
way. I don't understand the exact logic of the CSV file quote formatting, 
but you could use a function similar to this:
	enquote: func [string][rejoin [{"} string {"}]]
>> print enquote "hel^"lo"

and you could take it further by replacing single instances of " 
in the string with two instances, etc.


Mold-CSV just uses MOLD, so you should replace it with your own MOLD-CSV-STRING 
function, similar to the above enquote.
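The enquote-plus-doubling approach Anton describes is exactly the RFC 4180 quoting rule. For readers outside REBOL, the same logic as a one-liner in Python (the function name is mine):

```python
def csv_quote(field):
    """RFC 4180-style quoting: double every embedded quote,
    then wrap the whole field in quotes."""
    return '"' + field.replace('"', '""') + '"'

# csv_quote('hel"lo') yields "hel""lo" wrapped in quotes: '"hel""lo"'
```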
Anton, Is MOLD-CSV-STRING different than the MOLD-CSV in the above 
link?  I don't see it inside of that script.
mold-csv-string: func [string][rejoin [{"} replace/all copy string {"} {""} {"}]]
MOLD-CSV uses rebol's built-in MOLD to mold the string. I am proposing 
to replace MOLD with the above one-liner MOLD-CSV-STRING function.
Just replace the two instances of MOLD with MOLD-CSV-STRING.
Wait a minute ! MOLD handles any value, while MOLD-CSV-STRING only 
handles series at the moment... hang on.
Anton,  What do you think of this approach -- I'm just thinking it 
through and am not sure if I have covered all the bases.

Since my input can contain any number of symbols, commas, single 
and double quotes, and rarely, but possibly braces, what if I attack 
the problem a different way.  Whenever an embedded comma, or double 
quote or something like that occurs within a spreadsheet cell, it 
will require some kind of "extra" formatting like two quotes or the 
like.  It may even be that there are unique combinations of such 
symbols, rare as that would be, to have complex formatting of an 
input block for rebol to convert it properly for CSV.

What if I shift gears and look at a TAB delimited file instead.  
I know that I will never have TAB embedded in my cells, and that 
I deal with the entire block as a series instead.  I could embed 
TAB wherever needed to separate the columns and leave the remaining 
string the way it is.  Would that work, or would I still need to 
do some formatting to handle it.  I think I'll open an excel spreadsheet, 
and work in reverse to see what it needs for TAB delimited file. 
 Any comments?
I think I got it.

mold-csv-string: func [value][append insert replace/all either any-string? :value [copy :value][mold :value] {"} {""} {"} {"}]
Anton,  I tried the mold-csv-string and got the following:
Col A1
Col B1
    [2 3] 

} {


6a x

6b y
7a, x



,x} {



} {



Which is not usable for excel -- Maybe I misunderstood how to use it.
My end goal is to be able to take some formatted text of some kind, 
something that is generated by a utility of some kind, and generate 
a spreadsheet from it.  The formatted text can be of any type including 
" and the like.

I'm working in reverse, by creating a spreadsheet in MS excel with 
various kinds of data that I've shown above.  Some data with just 
alpha, just numbers, combinations, leading quotes, trailing quotes, 
embedded quotes, embedded commas, spaces etc.  Then I saved the spreadsheet 
as CSV and another version as Tab delimited.

Then by looking at those files via notepad or other editor, I can 
see how the data must be in order for MS excel to accept it.  I initially 
had problems with the CSV model because embedded quotes need other 
quotes added to that "cell" if you will.  The Tab delimited model 
has fewer restrictions on it.  The only thing that needs attention 
is when a "cell" starts with a quote, which needs additional quotes 
added to it.  Embedded quotes or trailing quotes don't need any modification.

Long story short -- I'm going with Tab delimited model and figuring 
out a rebol script to take data from an IBM utility dump (with rules 
on what data to capture), and model that info into an excel spreadsheet 
via Tab delimited file.
Hi Gregg -- The cookbook recipe is a good one for reading and processing 
CSV's as input.  My main issue is NOT the CSV part itself.  It is 
pretty simple really.  But as usual MS has some additional formatting 
rules whenever certain characters are embedded, and that is the part 
I'm having trouble with in order for a CSV file to be loaded as a 
spreadsheet.

You don't happen to have one that lets you write CSV files as output 
for excel (with all the special rules etc)???   :-)
Hey Gregg,  Thanks for the code.  I tried it out and while there 
are a few hiccups, I am planning on using that code when I create an 
Excel CSV version in addition to the Tab delimited version.  Thanks.
I will. Will there be something standard in R3 to, say, pull a csv 
file into blocks?
read/lines is a start to turning CSV into blocks -- you get one block 
entry per record.
I regularly import CSV files into REBOL.

The only trick is that I insist they be tab-delimited rather than 
comma-delimited, and that none of the strings contain tabs. That 
way, we can be sure that tab occurs only as a field delimiter.
The code then is
    data-block: read/lines %csv-data-file.txt
    foreach record data-block [
        record: parse/all record to-string tab
        ....code to handle the 'record block
    ]

If you need to handle actual comma separated where strings can also 
contain commas, you'll need to tweak the parse.
Thanks so much Sunanda, a couple of my customers receive stock update 
files as csv's (I don't think we could get them to send tsv's ) but 
that will fill a requirement brilliantly. If I decide to move any 
more complex apps to Rebol I may need to wait till I can get R3/command, 
I believe that will give me odbc (for excel and access) imports etc.
Thanks Gabriele -- that may work for SteveT.

However, there are some anomalies in the way parse handles RFC4180 
formatted CSV files. Particularly if there are embedded double quotes 
in a field.

For my purposes, I find tab delimited files are better -- though 
still not always perfect:

    >> x: parse/all {"a"^- ","","} to-string tab
    == ["a" { ","","}]   ;; 2nd field should be {,",} -- result is close enough that it can be tidied into that
    >> y: parse/all {"a", ","","} ","
    == ["a" { "} "" ""]  ;; 2nd field mashed irrecoverably!
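For comparison, a quote-aware CSV reader recovers embedded commas and doubled quotes that the split-style parse mangles; illustrated here with Python's stdlib csv module rather than REBOL:

```python
import csv
import io

# The second field contains both a comma and an embedded (doubled) quote;
# a stateful, quote-aware reader reconstructs it correctly.
row = next(csv.reader(io.StringIO('"a","b,""c"""')))
# row == ['a', 'b,"c"']
```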
I am a virtual Prof for an online university. I am in charge of several 
areas. I am using Rebol right now to help with our daily course reports 
I get in csv format.
I take the csv files, convert to RebDB, and then run reports in SQL. 
I am working on an R2 GUI version now. I really hope that R3 GUI 
is going to be functional soon. I could not tell if it was since 
no activity for a while on it. I tried the demo in R3 console and 
it does not work.
Group: Parse ... Discussion of PARSE dialect [web-public]
I have CSV file and I have trouble using parse one-liner. The case 
is, that I export tel. list from Lotus Notes, then I save it in Excel 
into .csv for rebol to run thru. I wanted to use:

foreach line ln-tel-list [append result parse/all line ";"]

... and I expected all lines to have 7 elements. However - once the last 
column is missing, that row is incorrect, as rebol parse will not 
add an empty "" at the end. That is imo a bug ...
I used csv = semicolon separated values and no quotes in-there ...
yes, it is, as I expect all lines to have 7 elements ... once there 
is no such element, I can't loop thru the result ... well, one condition 
will probably solve it, but imo it is a bug ... rebol identifies 
;; and puts "" in there, but csv, at the end, will use "value;", and 
rebol does not count that ...
parse, without a rule, treats quotes specially. this is to allow 
parse to be used directly with things like csv data.
but the truth is, that in CSV it is logical to have: parse {,d ,d} {,} 
== ["" "d" "d"]
I'm a bit stuck because this parse stops after the first iteration. 
 Can anyone give me a hint as to why it stops after one line?

Here is some code:

data: read to-file Readfile

print length? data

d: parse/all data [
    thru QuoteStr copy Note to QuoteStr thru QuoteStr thru quotestr
    copy Category to QuoteStr thru QuoteStr thru quotestr
    copy Flag to
    thru newline (print index? data)
]
== false

Data contains hundreds of "memos" in a csv file with three fields: 
Memo, Category and Flag ("0"|"1").  All fields are enclosed in quotes 
and separated by commas.

It would be real simple if the Memo field didn't contain double quoted 
words; then 
parse data none
would even work; but alas many memos contain other "words".
It would even be simple if the memos didn't contain commas, then
parse data "," or parse/all data ","
would work; but alas many memos contain commas in the body.
Gordon: I did not read this thread as a whole, but as for converting 
CSV strings to/from Rebol blocks, here are some fully functional functions 
;***** Conversion functions from/to CSV format
csv-to-block: func [
	"Convert a string of CSV formated data to a Rebol block. First line is header."
	csv-data [string!] "CSV data."
	/separator separ [char!] "Separator to use if different of comma (,)."
	/without-header "Do not include header in the result."
	/local out line start end this-string header record value data chars spaces chars-but-space
	; CSV format information http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm
] [
	out: copy []
	separ: any [separ #","]
	; This function handles replacement of dual double-quote by quote while copying the substring
	this-string: func [s e] [replace/all copy/part s e {""} {"}]
	; CSV parsing rules
	header: [(line: copy []) value any [separ value] (if not without-header [append/only out line])]
	record: [(line: copy []) value any [separ value] (append/only out line)]
	value: [any spaces data any spaces (append line this-string start end)]
	data: [start: some chars-but-space end: | #"^"" start: any [some chars | {""} | #"," | newline] end: #"^""]
	chars: complement charset rejoin [{"} separ newline]
	spaces: charset exclude { ^-} form separ
	chars-but-space: exclude chars spaces
	parse/all csv-data [header any [newline record] any newline end]
	out
]

block-to-csv: func [
	"Convert a block of blocks to a CSV formated string." 
	blk-data [block!] "block of data to convert"
	/separator separ "Separator to use if different of comma (,)."
	/local out csv-string record value v
] [
	out: copy ""
	separ: any [separ #","]
	; This function converts a string to a CSV formated one
	csv-string: func [val] [head insert next copy {""} replace/all copy val {"} {""}]
	record: [into [some [value (append out #",")]]]
	value: [set v string! (append out csv-string v) | set v any-type! (append out form v)]
	parse/all blk-data [any [record (remove back tail out append out newline)]]
	out
]
there's also the sql csv thingy in the library
This was I thought a simple task .. to parse a csv file....
this is Gabriele's published parser 

CSV-parser: make object! [
	line-rule: [field any [separator field]]
	field: [[quoted-string | string] (insert tail fields any [f-val copy ""])]
	string: [copy f-val any str-char]
	quoted-string: [{"} copy f-val any qstr-char {"} (replace/all f-val {""} {"})]
	str-char: none
	qstr-char: [{""} | separator | str-char]
	fields: []
	f-val: none
	separator: #";"
	set 'parse-csv-line func [
		"Parses a CSV line (returns a block of strings)"
		line [string!]
		/with sep [char!] "The separator between fields"
	] [
		clear fields
		separator: any [sep #";"]
		str-char: complement charset join {"} separator
		parse/all line line-rule
		copy fields
	]
]
this might fix Gabriele's parser ..

CSV-parser: make object! [
	line-rule: [field any [separator field]]
	field: [[quoted-string | string] (insert tail fields any [f-val copy ""])]
	string: [copy f-val any str-char]
	quoted-string: [{"} copy f-val any qstr-char {"} (if found? f-val [replace/all f-val {""} {"}])]
	str-char: none
	qstr-char: [{""} | separator | str-char]
	fields: []
	f-val: none
	separator: #";"
	set 'parse-csv-line func [
		"Parses a CSV line (returns a block of strings)"
		line [string!]
		/with sep [char!] "The separator between fields"
	] [
		clear fields
		separator: any [sep #";"]
		str-char: complement charset join {"} separator
		parse/all line line-rule
		copy fields
	]
]
This may make it easier for some, just exchange the "A"s for "," 
and mentally read it like you would read a csv file:

>> parse/case ",,,BBBaaaBBB,,,aaa" ","
== ["" "" "" "BBBaaaBBB" "" "" "aaa"]
that parse mode was intended to make parsing CSV easier. may not 
work with all the CSV variants though.
it's not a bug - parse without a rule is meant for csv parsing, and 
quotes delimit a field. it's not as useful as it was intended to 
be, but it's intentional behavior. you need to provide your own rule 
if you don't want quotes to be parsed.
tab is the delimiter. " after a delimiter (which also means " as the first 
char) means that the field is delimited by quotes. as i said, it 
was intended to parse csv files easily, however, i think it gets 
in the way more often than not. there should at least be a refinement 
to disable this. in any case, currently the only way around it is 
using your own rule.
If I remember well, this behaviour is because of CSV parsing - parse 
with delimiters (rules as a string) was designed mainly for that 
Don't know why, but most of the time when parsing CSV structure I 
have to do something like:

parse/all append item ";" ";" 

Simply put, to get all columns, I need to add the last semicolon 
to the input string ...
IIRC carl once said that the simple rule parse was meant to be used 
to parse CSV... so that might explain it.
strangely enough, it makes parsing CSV with quotes much more difficult, 
so I had to work around it.
for proper CSV parsing, we'll need some good functions for R3/Plus 
instead of trying to do some crappy stuff with PARSE directly.
What is your take on simple mode parsing? It is handy for simple 
CSV parsing, and the idiom is common:

parse/all row ";"

The trouble is, that if there is no data in last column, parse mistakenly 
makes the resulting block shorter, so you have to use common idiom:

rec: parse/all append row  ";" ";"

I always wondered, if it could be regarded being a parse bug?
the advantage would be to avoid skipping newlines. now that I think 
of it, you don't want it if you want to parse across a newline, but 
you wouldn't do that for CSV parsing.
I just started talking about this as a general limitation of parse 
that I meet a lot of the time, and I suppose Paul could have met it when 
trying to parse CSV
CSV parsing is an issue, because REBOL handles some inputs well, 
but fails for what may be a common way things are formatted. "CSV" 
isn't always as simple as it sounds.
I know parsing csv can be messy ... at least at this high level I 
don't know how to do it with escapes and commas in etc
this is exactly the reason why CSV was a really fucked-up idea. 
Commas are there in sentences and multivalued fields, not just numbers. 
I always use TSV.
Group: !RebGUI ... A lightweight alternative to VID [web-public]
is there anybody here at the moment who can help me develop a csv 
editor interface (so i can easily translate the qtask interface 
to Hungarian)?
guyyyys, pleaaaaase, when will the latest changes be synced? :-) Or are 
you suggesting I should install a CVS product? :-)
so when I install some CVS product, I will be able to reach it? What 
product do you suggest?
Graham, "I meant in the trac .. want to see the diff", Browse Source|Revision 
Log ... then click on a specific Rev or Chgset.

Pekr, "guyyyys, pleaaaaase, when latest changes will be synced?" 
... since you asked so nicely I'm doing it now ...

Pekr, "are you suggesting I should install CVS product?" ... only 
if you want to make source code changes

Graham/Anton: "/keep"; an oversight that a number of people (apart 
from Graham) have found annoying
Group: Rebol School ... Rebol School [web-public]
[unknown: 9]:
It also renders JavaScript, and XML, and CSV, and SMS, and Email, 
Well, if you assume that your internal storage method is one which 
just needs to be "converted" to another, like CSV => XML, you might 
be in for a surprise when trying to model a real-time dynamic system 
with Undo, like a paint program, with a file format as export.

For example, do you store a given object once, with the history of 
the object elsewhere, or do you store the object together, with the 
most recent at the top of the list.

Also, do you store objects and actions, or both together?
make-csv: func [block] [rejoin delimit copy block #","]
Group: Windows/COM Support ... [web-public]
Robert, on a recent project my app creates an xml file formatted 
with xml that Excel understands. It's a hassle but you can make very 
pretty spreadsheets that do just about all the formatting (so it's 
a far cry from CSV). I start with creating a very small excel spreadsheet 
then saving it as an xml file. Then I check out how they do the formatting. 
You can create multiple tabbed spreadsheets very easily this way. 
Doesn't do graphs though.
Group: Tech News ... Interesting technology [web-public]
I'm not.....Google is shuffling terabytes of data with very short 
response times.

XML may be a good archive/interchange format -- a better .CSV format 
-- but it just does not scale for operational systems of the size 
Google has.
Group: SQLite ... C library embeddable DB [web-public].
output to csv, import?
I have a problem when I import a CSV file. I read the file (1.5 -2 
MB), parse it and than write it out to SQLite.

For some records I get scrambled (invalid) strings in the database. 
Importing the CSV file via other SQLite tools works without any problem.

It looks like the Rebol memory somehow gets messed up. Or could it 
be on the way to the DLL?
Group: !REBOL3-OLD1 ... [web-public]
just conjoin results as CSV and show them in spreadsheet..
conjoin inserts delimiters too, did i get that right? How about a 
simple 'list, i feel it's a good match, listing things. But not a native 
speaker.. Another idea: it could be related to csv (this spreadsheet 
format, did i get the letters right), a conjoin is close to exporting?
We were thinking of CSV files when we added the /quoted refinement, 
but the conjoin function could probably be refined to be better at 
dataset export.
Well, REBOL blocks can double as datasets, with either nested blocks 
or fixed-length records. You could probably do a variant on conjoin 
that could convert either of these types to a CSV file, even with 
one that has its records delimited by something other than a comma, 
like a tab. Creating a new function to do this based on the techniques 
in conjoin would currently be easier than using conjoin to perform 
this task.
On the other hand, if you don't want a full copy of the CSV file in 
memory and would rather just convert to disk, conjoin should work 
just fine. It might be a good idea to add a /part option for fixed-length 
record blocks.
a simple CSV file, for example.
I don't like the behavior of parse regarding quotes after delimiters, 
although it's the same as in R2, where it's said to be needed for parsing 
csv data
>> parse/all {, ", ,} ","
== ["" { "} " "] ; this is as I expect

>> parse/all {, ," ,} ","
== ["" " " " ,"] ; I would expect == [""  {" } " "]
What I don't like about REBOL is all those read-text, send-service, 
open-service and other tonnes of mezzanines. But I think that actually 
I might reconsider my pov, and maybe I would prefer read-text or 
read-csv, which could incorporate tonnes of possible refinements, 
instead of giving 'read a special /text refinement .... 'read is too 
low level in R3 ....
BrianH: Carl asked me that if I want some of the following implemented/noticed, 
we should put them in the parse document. I gathered them in R3 chat. 
So - I would like to open a short discussion about the following proposals:

I would like to ask what will happen to proposals which are not 
part of the document? I mean - I do remember the following kinds of 
proposals, which are discussed in the parent thread:

1) make the /all option the default parse mode. If I want to get real 
results, I always use /all, to be safe

2) there was a proposal to add /space, or /ignore, to allow easier 
parsing of CSV, etc., but another proposal was that it might be better 
to leave it to the encoder/decoder level

3) there was a proposal to allow quoting of "", to make parsing easier.
Regarding parsing CSV-like input, there was a proposal to solve it via 
an external function or a decode-csv approach, which could internally use ...
I have a small CSV parse and CSV generator library that we could 
start from.
I keep reading about needs for CSV parsing enhancements - are there 
issues with parsing CSV files with R2's parse?
I'm actually messing around with REBOL (actually R2) and been away 
from it so long I forgot a lot of stuff.  Anyway, was wondering how 
R3 is progressing.  I was just using parse and thought about rejoin. 
 For example, if we can parse something such as:

blk: parse "this, simple, csv, string" "," 

wouldn't it be cool to just come back along and rebuild it with something 
such as:

rejoin/csv blk ","
I don't agree with such fundamental functionality of R2 to go into 
some optional library. Better to have some standard, than not. The 
same goes for easy read/lines equivalence. Dunno how current read/as 
turns out, if it will end-up as separate read-text or so, but we 
need easy way of having R2 analogy here. Those working with CSV data 
on a daily basis will greatly miss such features ....

parse "this, simple, csv, string" ","

I believe was meant to be removed, because it's too obscure. I think 
the intended function for this is SPLIT.
Group: !Cheyenne ... Discussions about the Cheyenne Web Server [web-public]
I've tried a .r, .csv, .exe, .zip. The only thing that affects things 
is whether the file size is larger than the post-mem-limit.
I'm currently reworking the response/store function. I'm considering 
dropping in-memory uploaded files mode, it was supposed to help processing 
uploaded data files (think CSV files for example) avoiding the disk 
write/read part, but it just adds complexity for a marginal gain. 
If anyone found that mode useful, please say so now.
I think the in-memory mode is not much needed for me. I was a little 
bit surprised why some files are in memory and others on disk. And 
usually you would like to store the original file (for example the 
csv) before processing anyway.
but yes, the question is what is the best way to determine the JSON 
mode (or XML or CSV or ...)
Group: DevCon2007 ... DevCon 2007 [web-public]
sadly, xml is the csv of the web :-)
csv = complex structured values
Group: Power Mezz ... Discussions of the Power Mezz [web-public]
The particular script I am writing is called GET ADDRESS.  This script 
takes a CSV file called contacts which has first and last name, city 
and state of all of my friends that I'd like to get addresses for 
Christmas cards, but have forgotten or misplaced.

So far, the script takes each entry and sends it to SUPERPAGES.com 
where the HTML sent back contains the information.  Right now, I'm 
simply saving the HTML as a file for each entry in my CSV.

What I would like to do is somehow parse the HTML from it and extract 
out the address lines, zip code, phone number etc.  But I admit that 
parsing through HTML is daunting to me.  So after looking around 
on the internet, I discovered HTML-TO-TEXT in your Power Mezz.  

That is where I am now, trying to figure it out and see how it works. 
 I've read some of your documentation, but I admit, I am still in 
the dark as to how it works -- at least for my application.  Any 
advice you have is welcome.

Thanks in advance.
Group: !REBOL3 Modules ... Get help with R3's module system [web-public]
I have also started writing some simple charts to explain the details 
of the design and behavior of the module system. In CSV format. These 
charts helped a lot in the fixing of the problems and implementation 
of the tests. As with the tests, I will try to get the charts published 
somewhere official.
Group: !REBOL3 Source Control ... How to manage build process [web-public]
Also, can you point us to a concise summary of Git usage?  I've used 
CVS and SVN, but not Git.