Mailing List Archive: 49091 messages

XML / dialects

 [1/19] from: brett::codeconscious::com at: 7-Jan-2002 18:29


To my mind XML is a serialisation format for domain specific languages. Dialects, I think, are domain specific languages encoded with a REBOL syntax.

What tools could be written to assist in the unpacking of XML documents/payloads into a REBOL dialect?

Thoughts, corrections? :)

Brett.

 [2/19] from: petr:krenzelok:trz:cz at: 7-Jan-2002 8:42


Brett Handley wrote:
> To my mind XML is a serialisation format for domain specific languages.
> Dialects, I think, are domain specific languages encoded with a REBOL
> syntax.
>
> What tools could be written to assist in the unpacking of XML
> documents/payloads into a REBOL dialect?
>
> Thoughts, corrections? :)
No tools. The Rebol XML parser can't even be called a parser by today's standards. There is a nice piece of code from Kevin, though, but I don't know where to find the link - maybe through rebolforces or just the script library.

I looked at various sites for XML stuff. http://xml.apache.org and even Mozilla.org provide some, but I find the material so complex that I don't know where to start. I am afraid that without proper XML parsers and tools rebol will never be regarded as a real player in middleware e-biz solutions, but all you will hear from this list is - you can implement it yourself ;-) Well, some folks will even try to tell you there is SOAP support in rebol, while there is not any - just some partial-compliance stuff ...

It would be really good if we (the rebol community), especially those of us who are skilled rebollers, could start to look at the stuff and create some compliant parsers, either purely rebol based or library wrapped (if there are any libraries for it ...).

As for the topic you raised - as far as my knowledge goes, XML is just a mark-up language in which you are supposed to create your own sublanguages = dialects. They are represented by XML schemas ... but there are some rules - you probably need a proper parser to dig the info out of it ...

I am currently reading a book about XML, SOAP and MS BizTalk server. The BizTalk app framework seems to be an open standard, and there are examples in the book of how to create one, in some language called OmniMark. Does anyone use that language? According to the book's author, it is the best language he has seen for streamed data manipulation on the web. Maybe he just doesn't know about Rebol yet :-)

-pekr-

 [3/19] from: petr:krenzelok:trz:cz at: 7-Jan-2002 10:40


Nice article about implementing XML stuff: http://www.sys-con.com/xml/article2a.cfm?id=240&count=13411&tot=17&page=1 -pekr-

 [4/19] from: joel:neely:fedex at: 7-Jan-2002 6:04


Hi, Petr,

Petr Krenzelok wrote:
> Brett Handley wrote:
>
> > What tools could be written to assist in the unpacking of XML
<<quoted lines omitted: 5>>
> but I don't know where to find link - maybe thru rebolforces or
> just script library.
I must disagree mildly. The built-in PARSE-XML was what got me to look at REBOL to begin with (although I agree with you that it's not a robust, industrial-strength parser). It was just enough to let me do some experiments with XML in a rapid-turnaround fashion. There are links on http://www.REBOLforces.com to additional XML resources in REBOL.
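For anyone who hasn't tried it, here's a minimal sketch of what PARSE-XML gives you (REBOL/Core 2.x) - the nested block is already fairly close to a dialect:

    REBOL []

    ; parse-xml returns a nested block of the form
    ;   [document none [element ...]]
    ; where each element is a [tag-name attributes children] block
    doc: parse-xml {<person gender="male"><name>Porter</name></person>}

    root: first third doc    ; the root element block
    print first root         ; "person"           - tag name
    probe second root        ; ["gender" "male"]  - attribute name/value pairs
    probe third root         ; [["name" none ["Porter"]]] - child elements/text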
> I am currently reading the book about XML, SOAP and MS Bizztalk
> server. BizTalk app framework seems to be open standard, and
> there are examples in the book of how to create one, in some
> language called OmniMark. Does anyone use that language? According
> to book author, it is the best language he saw for streamed data
> manipulation on web. Maybe he just doesn't know about Rebol yet :-)
The company site is http://www.omnimark.com but I found my last visit very disappointing. There used to be info about the language itself, including free download for personal use, but now there's a press release about their having been "acquired" and a bunch of rah-rah marketing barking. I looked at it briefly and recall it as being sort of a cross between AWK, Pascal, and COBOL.

-jn-
--
; sub REBOL {}; sub head ($) {@_[0]}
REBOL []
# despam: func [e] [replace replace/all e ":" "." "#" "@"]
; sub despam {my ($e) = @_; $e =~ tr/:#/.@/; return "\n$e"}
print head reverse despam "moc:xedef#yleen:leoj"
;

 [5/19] from: joel:neely:fedex at: 7-Jan-2002 6:29


Hi again, Petr - I found a sample...

Petr Krenzelok wrote:
> ... and there are examples in the book of how to create one,
> in some language called OmniMark...
Errol Chopping of MIT has an on-line tutorial for OmniMark at http://clio.mit.csu.edu.au/omnimark/

In his first chapter there are a couple of small samples to demonstrate OmniMark.

As an example, suppose a text file called 'timetable.dat' contains the complete timetable for a large university. A tiny fragment of the file is shown below; the actual file is very large and covers several hundred subjects taught in several hundred rooms throughout any academic week.

    EEB121 THE E/C PROFESSION: AN INTRO
    Subject co-ordinator: L. Harrison
    L Mon 1300 - 1350 S15 - 2.05
    T1 Wed 1400 - 1450 C02 - 112
    T2 Wed 1300 - 1350 C02 - 112
    T2 Thu 1300 - 1350 S01 - 102
    T1 Thu 0900 - 0950 S01 - 101
    T3 Thu 1000 - 1050 S01 - 101
    T3 Thu 1400 - 1450 C03 - 403

    EEB322 ISSUES IN CARE & EDUCATION
    Subject co-ordinator: T. Simpson
    L Tue 0900 - 0950 S01 - 102
    T1 Tue 1100 - 1250 C08 - 1.04
    T2 Tue 1400 - 1550 C08 - 1.04

A list of all the times a particular room (say S01-102) is used might be needed. Finding this information is difficult to do manually because the whole timetable is sorted by subject, not by room. To find, collect and display the list of times we need to find all occurrences of the sequence 'S01 - 102' in the file and output the day and time information for these occurrences.

By inspection we can identify some patterns which can be used to design the search:

- the room information sequence occurs on a line of text;
- each line starts with a one or two character code;
- the day and time is before the room;
- each day name is three letters;
- the time is four digits, a space, a hyphen, a space and another four digits.

An OmniMark find rule to locate and capture the day and time information might be:

[Code Sample: C01T05a.xom]
001 process
002 submit file "timetable.dat"
003
004 find line-start any{2} white-space+
005 (letter{3} white-space+
006 digit{4} white-space+
007 "-"
008 white-space+
009 digit{4}) => dayAndTime
010 white-space+ "S01 - 102"
011 output "%x(dayAndTime)%n"
012
013 find any
014

Here the 'find any' rule (on line 13) consumes all characters not found by the first find rule, so that the only output is that delivered by the statement on line 11; that is, all the days and times used for room S01-102.

I assume that this example aims to give a feel for the notation; it certainly doesn't impress me with power. In Perl, for example, one can write:

    open (TIMES, "timetable.dat");
    while (<TIMES>) {
        if (/..\s*([a-z]{3}\s\d{4}\s-\s\d{4})\sS01 - 102/) {
            print "$1\n";
        }
    }

The second example is a bit more interesting...

As well as parsing, OmniMark allows any SGML or XML document to be translated into any other arbitrary format. A fragment of XML is shown below. It contains a group of people:

    <!DOCTYPE PEOPLE SYSTEM "people.dtd">
    <PEOPLE>
    <NAME>Mary Smith</NAME>
    <CITY PCODE="2795">Bathurst</CITY>
    <COUNTRY>Australia</COUNTRY>
    <NAME>Wally Wallpaper</NAME>
    <CITY PCODE="2222">Hurstville</CITY>
    <COUNTRY>Australia</COUNTRY>
    <NAME>Sam Widge</NAME>
    <CITY PCODE="1234">Bangalore</CITY>
    <COUNTRY>India</COUNTRY>
    </PEOPLE>

An OmniMark program containing element rules can be written to process this XML. As a trivial example, the following rules output all the people's names and postcodes. Each name and corresponding postcode is placed on a separate line and a tab character is inserted between the name and the postcode. The output file is thus a tab-delimited file which could easily be imported into a spreadsheet.
[Code Sample: C01T06a.xom]
001 process
002 do xml-parse document
003 scan file "people.xml"
004 output "%c"
005 done
006
007 element people
008 output "%c"
009
010 element name
011 output "%c"
012 output "%t"
013
014 element country
015 suppress
016
017 element city
018 output "%v(pcode)%n"
019 suppress
020

With this kind of process, the XML (or SGML) data is streamed into OmniMark and is parsed against a DTD. Then each element is fed to the program. As the program sees each element, one of the element rules is fired and does the appropriate work with the element's content and/or attributes.

Even without too much previous knowledge of SGML, XML or OmniMark the program should be reasonably easy to follow; the symbol %c is a reference to the content of each element and the %v symbol is a reference to the value of an attribute. The statement 'suppress' avoids firing rules for the content of an element. Note that the programmer does not need to worry about low level details like finding angle brackets, element names or attributes in the raw data - OmniMark handles all of this and leaves the programmer with the high-level task of doing something with the information.

Maybe someone would enjoy coding the equivalent of both examples in REBOL...

-jn-
--
; sub REBOL {}; sub head ($) {@_[0]}
REBOL []
# despam: func [e] [replace replace/all e ":" "." "#" "@"]
; sub despam {my ($e) = @_; $e =~ tr/:#/.@/; return "\n$e"}
print head reverse despam "moc:xedef#yleen:leoj"
;
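For the curious, a rough REBOL counterpart to the first (timetable) example might be built on PARSE. This is only a sketch, assuming the line layout shown in the fragment above:

    REBOL []

    ; Print the day and time of every booking for room S01 - 102.
    digit:  charset "0123456789"
    letter: charset [#"A" - #"Z" #"a" - #"z"]
    alnum:  union letter digit
    ws:     charset " ^-"

    day-time: none
    room-rule: [
        some alnum some ws                       ; the code at the start of the line
        copy day-time [
            3 letter some ws                     ; day name
            4 digit some ws "-" some ws 4 digit  ; start and end times
        ]
        some ws "S01 - 102" to end
    ]

    foreach line read/lines %timetable.dat [
        if parse/all line room-rule [print day-time]
    ]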

 [6/19] from: petr:krenzelok:trz:cz at: 7-Jan-2002 14:01


Joel Neely wrote:
> Hi, Petr,
> Petr Krenzelok wrote:
<<quoted lines omitted: 15>>
> not a robust, industrial-strength parser). It was just enough to
> let me do some experiments with XML in a rapid-turnaround fashion.
OK, I know. I also know RT can't do everything themselves. But what about the following conditions?

- I am not skilled enough to program a standards-compliant parser myself.
- I don't have enough free time to create one.
- If some manager inside our company looks at the rebol site, he/she will not find any features listed as "support: XML 1.0, SOAP 1.0, UDDI, ebXML," etc. etc., so for such a manager Rebol simply does not support the features, and an argument of the type "we can implement it ourselves" simply doesn't count for the big boys ...
- We would eventually pay for the features coming with a Command update, along with other databases such as PostgreSQL, DB2, etc.

RT is currently focused on another market - the collaborative one. That's perfect, Rebol has strong potential there, but ... :-)
> There are links on http://www.REBOLforces.com to additional XML
> resources in REBOL.
<<quoted lines omitted: 11>>
> a press release about their having been "acquired" and a bunch of
> rah-rah marketing barking.
Yes, I looked there too. The book is rather new and it claims it comes for free. It even uses the same arguments for using OmniMark instead of Perl as we use with Rebol :-)

My intention was to look at the MS BizTalk app framework and create one in Rebol, if it really comes for free, but I don't believe I am myself capable enough to bring such a complex task to life ...

-pekr-

 [7/19] from: pwoodward:cncdsl at: 7-Jan-2002 9:08


Petr -
> OK, I know. I also know RT can't do everything themselves. But what about
> following conditions?:
<<quoted lines omitted: 7>>
> - we would eventually pay for the features coming with Command update,
> along with other databases as PostGress, DB2, etc.
I sort of have to agree with you here - to an extent. However, I also think that the play RT is making is a little different from the traditional enterprise approach. Stronger XML support would definitely be a "good thing" (tm) - as REBOL will have to play nice with the other kids on the playground. Along with that, both SOAP and UDDI are clearly needed - even if they won't necessarily be used in their intended fashion (i.e. discovery and usage of some centralized resource), but rather as a mechanism to facilitate distributed computing.
> RT is currently focused on another market - collaborative one. That's
> perfect, Rebol has strong potential there, but ... :-)
Here are some of my thoughts on that front - some of you may have already seen them - but it was a realization that I came to after a lot of work with P2P (Groove, Gnutella, etc).

----- Thoughts Follow ------

I finally figured out a key argument against the centralized "enterprise" model of application building. Decentralization of infrastructure makes sense (esp. for a company like ours, which has an ASP-like model) for the following reasons:

If you're down, that doesn't mean that everyone is down. Just because your website is down doesn't mean that people cannot still get work done. When your whole application is built around a large, central data store with a web front end, if it (or any part of it) goes down, you're out of business until it comes back up. Whether it's your web server, your application server, or your database server - if one of these core systems goes down, the whole application is down, and the people who depend on it to get things done are out of luck.

This precipitates a whole ream of expenses to enhance uptime, reliability and availability. Complex backup schemes are implemented. Fail-over hardware is bought and set up. More money is spent on hardware and software that may never be used - unless something crashes. Now that you've got all the extra hardware and software, you've got to hire more people to maintain it - because maintenance will be a nightmare. Multiple web servers, app servers, and fail-over database servers.

Intrinsic in all of these are some key problems. Replication of data between the live and fail-over databases is a must; unfortunately I've never seen a scheme to do this which is uncomplicated and foolproof. And on the middle, application tier you end up building complex, stateless systems to compensate for a variety of failures - bundling several operations together as an atomic instruction in order to avoid partially committed transactions. Finally, on the front-end web servers, you have to implement a complex system to enable transferral of session data - or make session data universally available to all web servers. Needless to say, this can be a nightmare!

If your data is centralized, then you're vulnerable to a variety of attacks, from hackers to natural disasters. Do you really want to be responsible for your customers' data? Why not let them be responsible for their own data? Take yourself out of the loop so that you can concentrate on building new features and a better application without having to be responsible for maintaining all that data. You have to implement complex backup schemes, buy more hardware to support the backups, get specialized software to back up live systems - and you might need fail-over backup systems. Then you'll need to investigate off-site storage of backups. Disaster recovery plans are also necessary in the face of all that centralization.

Decentralization and distribution of computing responsibilities won't alleviate these problems 100%, but they may help reduce them. By distributing the data, and letting the userbase hold onto their own data, you automatically gain multiple backups of that data without any additional work. This leaves you with the remaining problems:

1> Security. Which is already a problem under the current, de-facto, decentralized computing system. Each user's PC currently represents a security risk and an additional entry point. Distributed computing doesn't change this.

- However, strong encryption of the data exchanged between peers in a distributed system does help reduce risk.
- Additionally, user understanding that their data is on their system may help improve awareness and responsibility.

2> Management. Again, already a decentralization problem. Decentralization and distribution will still need a mechanism to allow for global management.

- Current computing systems are already distributed (desktop PCs, servers) and already have centralized management tools.
- IOS will definitely allow a form of "push" to update reblets, and the client application itself, without having to go around to every desktop in the org (which is enough of a PITA even when everyone is in the same building).

I do realize that IOS is going to have some level of server application (the scenario online mentions it). However - I'm just guessing - this should be much more lightweight than your traditional web server, app server, database backend. Depending on the nature of the application, I'm sure that those sorts of things might be required - but in some cases I don't think it really is. And in any case I'm sure that it will eventually be possible to distribute the responsibilities of the server to various clients (sort of having "super" clients that function as partial backups of the server).

----- Thoughts Stop -----

Anyway, when you start to think of REBOL in this fashion (at least as used in the enterprise) then you start to see why and where RT might be focusing their efforts.

- Porter

 [8/19] from: sunandadh:aol at: 7-Jan-2002 9:47


Hi Petr,
> OK, I know. I also know RT can't do everything themselves. But what about
> following conditions?:
>
> - I am not skilled enough to program standards compliant parser myself
> - I don't have enough free time to create one.
Let me freewheel a few thoughts from the above three statements.

There is no decent way for tool developers to make money developing tools for and in Rebol. We have an excellent script library, and some great snippets on this list, but no real incentive for anyone to sit down and put several weeks' (or more) work into a masterpiece.

We need some additional mechanism to allow the secure sale of Rebol application code. That mechanism almost exists with Encap, so with a few tweaks RT could kick off a market rebolution.

Here's one way it could work.

For some relatively small sum paid to RT (say USD 10) I can activate the "encap-ability" of my otherwise freebie Rebol/View. That enables my Rebol/View to run "encapped" programs for which I hold a key. So you (or anyone else) can sell me "encapped" code for whatever you want to charge. If the price is right, and your code reputation is good and it's a tool I need, I'll buy.

The only thing left is how do you "encap" your code? Well, RT offer two mechanisms today. You could use a tweaked version of either of those. And here's a third that could help us small developers: we submit code to RT for encapping and sale. If they like the code, they'll "encap" it and offer it for paid download from their site. They take a cut, the Rebol world gets a new tool, and you get some money from it!

Where's the downside?

Sunanda.

 [9/19] from: petr:krenzelok:trz:cz at: 7-Jan-2002 16:14


Nice concerns, Porter. Here's a cut and paste of my IOS messages to Steve:

------------

Steve: I am still confused re the distributed database. You know - I can imagine off-line, asynchronous, queue-based messaging, be it text files or SOAP/UDDI services, transferring data = records, but I am still confused about where and how data should be stored. A distributed database also means data duplication, and I can imagine the IOS model working for a very lightweight model of data, but without advanced storage capabilities and advanced replication capabilities I am not sure it is viable .... Conference messages, e.g., will grow to some unmanageable thousands of them. What if you want to build a knowledge base out of them?

We are starting to think out a new regional information system, suited well for an information centre, as well as for kiosks in some future, mobile devices etc. First I thought of a local rebol version, with some generate-on-demand output capability - simply I thought the web site would be just plain and simply a kind of output device. But once you want people to choose some option, create some queries, you need some logic put on your webserver machine. So I am not decided yet on how to start to deal with it. I can't base it purely on Rebol, as I don't believe Rebol will be available for all the devices, nor do we have proper View browser integration etc. So, am I better off with a robust SQL backend, surrounded by a powerful Apache + Fast-CGI (= nice distribution, especially in External mode) + Rebol environment, being service/message/website-generation/whatever based? ....

---------------

Now to some of your points:

You raise the issue of complex replication schemes, hardware and app-server set-ups, etc. But - what about decentralised data storage?:

- You need some central place which will drive your sync mechanisms. (In the case of IOS it is the central IOS server - /Serve.)
- You need to back up the data anyway - I don't believe you will bet on one of your company PCs, which can potentially meet an HD failure or something like that.
- Data duplication could mean real death to data validity. In our SAP R2 and surrounding systems our customer address info was duplicated in 3 or more systems. The systems can't even talk to each other - a complete mess here ...

But maybe I just anticipate where you are heading with your thoughts. For e.g. in our large enterprise, our programmers are infected by thinking in SAP terms. On-line processing? Nah, why? Who needs info on-line? We will do it in a job. Is it enough for you to run the job on a daily basis? I don't like such an approach - my vision is ZLE - zero latency enterprise:

- Why be always dependent upon a "live" connection?
- Why wait for the job task to complete?
- Messaging is the way imo - an asynchronous, small-messages-based system, with the ability to work off-line, spooling data to a database or directories, not requiring the accepting system to be "live and listening at the time"; the system can be in maintenance mode yet still not affect the system sending the info ...

... but my vision is still ... incomplete and unproven ... :-)

-pekr-

 [10/19] from: gavin:mckenzie:sympatico:ca at: 7-Jan-2002 11:02


*sigh*

This is a topic dear to my heart. I just wanted to express some opinions. Brevity isn't my strong point, so feel free to save yourself from reading a *long* post of one guy's ramblings and reach for your delete key.

First, to respond to Brett's original question... I agree with what Petr said: XML is a "meta-language". It is a means to create domain-specific markup languages that may be used to represent documents (like XHTML), data (like so many of the e-commerce XML languages), or declarations that impact process (such as messaging, XML-RPC, SOAP, etc.).

Every minute of every day during my job I deal with XML and its many children. I am passionate about the stuff. I'm the W3C AC Rep for my employer. I've led development teams that have built implementations of XML-DOMs, XML-Digital Signature, Schema tools, etc. I've out-sourced development of XSLT and XPath engines. Truly, XML is good. Indeed, there's been a lot of over-hype on XML and it ain't all perfect (some of it can be downright dreadful), but it is still IMO the biggest thing since the Web. Yeah, I know that sounds like a tired pitch line... but really, I believe it.

And it is pretty old stuff. Not paying any attention at all to SGML's history, we can still say that XML is just a month away from its fourth birthday. That's old. IMO really old. The list of other technologies that has descended from XML is long, and still growing.

Though I am primarily a lurker on this list, and I'm far from a REBOL expert, I got into REBOL two years ago. I was overjoyed by a language that wasn't trying to be all things to everyone, was (mostly) easy to write (and easy to read!), was small and ideally suited for the Web. Within two days of playing with REBOL I sent an email to REBOL tech asking "Um, where's the XML parser?" I assumed that I must have missed it in the docs. I also expressed my views that XML was destined to be the very air that Web applications breathe and that REBOL could (with some work) position itself as the XML processing script language of choice.

I must say that I was truly pleased by the fact that REBOL did respond to my emails, and we exchanged a couple more. I was told that an XML parser was coming. I stopped playing with REBOL for a while, came back, and lo, 'parse-xml' had appeared. Maybe it was there all along, I dunno. But it wasn't (isn't) a real XML parser; i.e. compliant.

As a result of parse-xml's non-compliance, I tried hooking up James Clark's world-class EXPAT parser lib to REBOL with a beta version of REBOL/Command, but got stymied by the call-back nature of mostly all parsers. And of course, there's the little problem of the lack of Unicode support in REBOL.

Regardless, I had real ad-hoc work that I wanted to do with REBOL, and (almost) all the information that I needed to process was XML. XML that would often break REBOL's built-in parser.

Why is a real XML parser important to any software application or tool? My employer, and the commercial software that I help to architect, have fallen victim to the fact that we've had a few customers (who really were the real victims) build hand-coded XML parsers to process the XML that our software produces -- and eventually their parsers break. Why? Because, invariably, from one version of our software to the next, we make changes to our XML formats.

But, and here's the rub, those changes often aren't changes that materially alter the XML formats -- that is, they wouldn't upset a real XML parser or change the "schema" of the format, but they wreak havoc with people who have built hand-coded parsers that don't behave as robustly as a real XML parser would. Since then, I've tried to get the word spread to all of our customers around the globe that they need to use real XML parsers. The contract between our software and our customers is XML 1.0, not a subset.

Anyway, I love REBOL. I don't want to use some other script language when I've got ad-hoc processing of XML to do. Those other languages may well have robust XML tools and be web savvy (take your pick: Perl, Python, Ruby, PHP), but I can be more productive with REBOL.

So, I trundle on hoping that one day I'll see the foundation for an XML framework in REBOL. That one day I'll be able to espouse the virtues of REBOL instead of Perl (or some other language) for building their 'glue' applications that work with my employer's software. And without a strong foundational layer of XML support, how will I (or my customers) ever be able to use REBOL to do higher-level XML work? XML work eventually focused on things like XML-Schemas, XML-Digital Signatures, ebXML, and so on.

Until then, I'll have to be hopeful. And, as I encounter gaps in REBOL's XML foundation, I'll try to make available bits of my own REBOL code to bridge the gaps, and I'll also leverage the work of others in this great community of people who love REBOL.

But gee... fourth birthday of XML on Feb 10 2002. That's old. Time's-a-wasting. When does hopeful optimism become something less?

Gavin.

 [11/19] from: pwoodward:cncdsl at: 7-Jan-2002 12:12


Petr -
> Now to some of your points:
>
> You raise the issue of complex replication schemes, hardware and
> app-server set-ups, etc. But - what about decentralised data storage?:
>
> - You need some central place which will drive your sync. mechanisms. (In
> the case of IOS it is cetral IOS server - /Serve).
Absolutely true - but it's relatively lightweight (I think it's an Apache add-on). And, in theory, Reblets and apps deployed on IOS should be able to function w/o access to the server. Of course, that does mean your copy of data will be out of sync.
> - you need to back-up the data anyway - I don't believe you will bet on
> one of your company PCs, that can potentially meet HD failure or
> something like that.

I'm not sure about that. What if the data was transparently replicated to a few other peers in a group? Say your data was actually (unbeknownst to you) replicated securely to 5 other desktops in your org. Would that be enough redundancy?
> - data duplication could mean real death to data validity. In our SAP R2
> and surrounding systems our customer address info was duplicated in 3 or
> more systems. The systems can't even talk one to each other - completly
> mess here ...

I think there will still be a place for large, centralized data stores. But not everything needs to run that way.
> "ZLE" - zero latency enterprise: > > - Why to be always dependant upon "live" connection?
I think that's where RT is trying to go. In theory (I haven't really seen a system built on it) IOS should be able to function w/o a live connection. It'll sync up when it can.
> - Why to wait for the job task to complete?
It shouldn't. The request should go out - and the client should act as a server with a background task of watching for responses.
> - messaging is the way imo - asynchronous, small messages based system,
> with ability to work off-line, spooling data to database or directories,
> not requiring accepting system to be "live and listening at the time",
> the system can be in maintanance mode yet still not affecting system
> sending info ...
>
> ... but my vision is still ... uncomplete and unproven ... :-)
As is mine, and I'm sure RT has a little ways to go yet in this regard. The best examples I've seen with this so far have been in Groove (it's a bit sluggish for my taste) - where a server can be equipped to work as a gateway to legacy services. One cool thing is the ability to use the IM process of Groove to post to a web page (the "user" you message is actually a bot). Ultimately I see P2P delegating more processing and data redundancy to the clients - but there will still be centralized services. - Porter

 [12/19] from: jason:cunliffe:verizon at: 7-Jan-2002 3:38


> I am currently reading the book about XML, SOAP and MS Bizztalk server.
> BizTalk app framework seems to be open standard, and there are examples in
> the book of how to create one, in some language called OmniMark. Does
> anyone use that language? According to book author, it is the best
> language he saw for streamed data manipulation on web. Maybe he just
> doesn't know about Rebol yet :-)
hmm.. You might appreciate the good series of articles by David Mertz. His focus examples are usually in Python, but he has a keen and witty mind and is not afraid to think for himself. The latest is about XML-RPC:

http://www-106.ibm.com/developerworks/xml/library/x-matters15.html?open&l=810,t=grx,p=rpc

<quote>
XML-RPC is a remote function invocation protocol with a great virtue: It is worse than all of its competitors. Compared to Java RMI or CORBA or COM, XML-RPC is impoverished in the type of data it can transmit and obese in its message size. XML-RPC abuses the HTTP protocol to circumvent firewalls that exist for good reasons, and as a consequence transmits messages lacking statefulness and incurs channel bottlenecks. Compared to SOAP, XML-RPC lacks both important security mechanisms and a robust object model. As a data representation, XML-RPC is slow, cumbersome, and incomplete compared to native programming language mechanisms like Java's serialize, Python's pickle, Perl's Data::Dumper, or similar modules for Ruby, Lisp, PHP, and many other languages.

In other words, XML-RPC is the perfect embodiment of Richard Gabriel's worse-is-better philosophy of software design (see Resources). I can hardly write more glowingly on XML-RPC than I did in the previous paragraph, and I think the protocol is a perfect match for a huge variety of tasks. To understand why, it's worth quoting the tenets of Gabriel's "worse-is-better" philosophy:

Simplicity: The design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.

Correctness: The design must be correct in all observable aspects. It is slightly better to be simple than correct.

Consistency: The design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.

Completeness: The design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.
</quote>

enjoy
./Jason

 [13/19] from: jason:cunliffe:verizon at: 7-Jan-2002 12:58


<[SunandaDH--aol--com]> wrote:
> We need some additional mechanism to allow the secure sale of Rebol
> application code.
>
> That mechanism almost exists with Encap, so with a few tweaks, RT could
> kick off a market rebolution.
oh, YES..
> Here's one way it could work.
>
> For some relatively small sum paid to RT (say USD10) I can activate the
> "encap-ability" of my otherwise freebie Rebol/View.
great idea
> That enables my Rebol/View to run "encapped" programs for which I hold a
> key. So you (or anyone else) can sell me "encapped" code for whatever you
> want to charge. If the price is right, and your code reputation is good
> and it's a tool I need, I'll buy.
>
> The only thing left is how do you "encap" your code? Well RT offer two
> mechanisms today. You could use a tweaked version of either of those. And
> here's a third that could help us small developers. We submit code to RT
> for "encapping" and sale. If they like the code, they'll "encap" it and
> offer it for paid-download from their site. They take a cut, the Rebol
> world gets a new tool, and you get some money from it!
That's a good idea for motivating and rewarding people who build significant REBOL applications or tools.

A good example I have been grappling with very recently: a web site development has had its security compromised. From now on the site needs secure client access tools such as SSH2 Telnet and better FTP. What to do?

1. Buy 3rd-party commercial software such as those offered by F-Secure, etc.
2. Apply/develop a REBOL-based solution.

I'd much rather propose a cross-platform REBOL tool. One which I am free to offer, at whatever price and terms are appropriate. I believe people would not mind paying a small amount of money ($5-$50) for a custom tool. Especially if that fee is invisible, i.e. factored into their basic service fee. And if it makes it easy for authors, distributors or end-users to upgrade transparently or via a visible pay-per-XYZ scheme.

A major advantage of REBOL and REBOL/View is that it _is_ cross-platform. No need to make people keep buying/installing when they switch machines/locations.

In many file transfers, people are not really concerned about the data itself. But they are very concerned about the CONTROL DATA [logins passwords protocol acknowledgements etc]. Perhaps we need to think about a REBOL dialect for this aspect of 'secure transactions'..? This is the sort of thing design-patterns folk spend a fair bit of time thinking about, and XML-ers writing models of.

The key handles for this REBOL 'secure transactions' dialect might live in the default header REBOL [], or perhaps use its own similarly styled header. For example, where we now write:

    do %somefile.r

we might also write:

    do/login
    do/register
    do/secure %somefile.r
    do/transaction %somefile.r
    do/upgrade %somefile.r
    do/pay paypal://%somefile.r
    do/accept paypal://%somefile.r
    do/billing %somefile.r
    do/receipt %somefile.r
    do/distribute %somefile.r

The point is to use REBOL's many strengths: being a very pragmatic platform which does not assume any e-platform dependency, but encourages a higher-level, human-oriented approach.

Gavin's post about XML raises a lot of good points. But I also believe that there are aspects which XML development does not address and indeed distracts designers and programmers from thinking about - like ease-of-use ;-)

Assuming that XML or variations are here to stay globally, and that REBOL gets better XML tools soon, I think it will still need REBOL-istic support for transaction handling.
> Where's the downside?
For any of your ideas to work well, I think REBOL needs at least a standard default tool + paradigm for propagating secure passwords and time-limited keys. Something people can build upon. Some mechanism which can scale from developer-to-developer all the way up to commercial shrink-wrapped REBOL packages.

A downside would be anything which hinders open development and sharing as the REBOL community does now. A downside is any scheme which depends too much on a centralized site for handling. REBOL is aimed at non-centralized "x-internet" architectures. As you point out, it sorely lacks a few key elements to allow and encourage easy commercial distribution.

./Jason

 [14/19] from: sunandadh:aol at: 7-Jan-2002 19:03


Hi Jason, Thanks for the comments....
> > Where's the downside?
>
> For any of your ideas to work well, I think REBOL needs at least a standard
> default tool + paradigm for propagating secure passwords and time-limited
> keys. Something people can build upon. Some mechanism which can scale from
> developer-to-developer all the way up to commercial shrink-wrapped REBOL
> packages.
I'd thought of that too. It needs to be built into the core so that encapped applications can run for (say) 30 days, or 50 invocations, or 100 files converted, or whatever.
> A downside would be anything which hinders open development and sharing as
> REBOL community does now.
That's true. But the script library and this list would let us all still share snippets and insights. And the "professional" developers could post reduced-function versions of the "commercial" products as source code in the script libraries. That'd whet some people's appetites for the "real thing" while allowing access to their source code so we could see how good they are at Reboling. Sunanda.

 [15/19] from: petr:krenzelok:trz:cz at: 8-Jan-2002 5:56


Thanks Gavin, I could not have said it better. Your response is the best so far - half-way solutions mean trouble if the opposite side changes something. We need parsers able to comply with their standard specs ....

-pekr-

 [16/19] from: brett:codeconscious at: 8-Jan-2002 21:44


First off, thank you everyone for some very good reading on this thread. It has certainly branched and grown in quite a few interesting directions - especially Gavin's heads-up on hand-coded parsers. I enjoyed the article references too.

Maybe, though, I should have put the subject line as Dialects / XML (order is significant)? I'm curious about the ways that information, delivered by XML, can be expressed in Rebol. Also about any useful things that can be done to automate this work. Whether Rebol dialects can be automatically, even if partially, created to mimic/equate/simulate/... the domain specific language that the XML is encoding. Maybe XML Schemas/DTDs/RelaxNGs/etc. need to be read into Rebol forms. I don't know, I'm just speculating, wondering out loud and possibly but unintentionally just making a nuisance of myself.

Putting it another way, it is hard enough to learn a domain specific language. I feel that it is too hard to have to do that *and* process it in terms of another language (XML) just because that is the "transport". XML is at a different "semantic level". I'd like the XML encoding to be transparent to me so that I can code in the terms of the domain specific language as easily as I code VID.

Actually, VID is an interesting example. What would it look like as XML? How would we write a program to interpret an XML-VID and build a GUI directly? Maybe someone should encode a VID layout as an XML document (not forgetting to build a DTD or whatever). Then we convince some ambitious soul to make a program to interpret such a document via an XML parser, and build a working GUI directly. Then we can sit back and say "Ah, good job dude. You've used state of the art technology and current thinking to produce that display. But couldn't you do it in one line rather than pages of code?". But that would be too cruel.. :)

Brett.

 [17/19] from: pwoodward:cncdsl at: 8-Jan-2002 10:25


Brett - It's definitely clear that XML is of some interest to the REBOL community. Most of my work with XML has been done in Java, using the Xerces parser - SAX, basically. Although I've also done some work with it under ASP - it's quite convenient to retrieve recordsets as XML from a database and use an XSL to transform them for display...
> Actually VID is an interesting example. What would it look like as XML?
> How would we write a program to interpret an XML-VID and build a GUI
> directly? Maybe someone should encode a VID layout as an XML document
> (not to forget building a DTD or whatever). Then we convince some
> ambitious soul to make a program to interpret such a document via an XML
> parser, and build a working GUI directly. Then we can sit back and say
> "Ah good job dude. You've used state of the art technology and current
> thinking to produce that display. But couldn't you do it in one line
> rather than pages of code?". But that would be too cruel.. :)
That would be pretty cool - as in theory one could use an XSL to transform the XML representation of a View layout to another format. Oldes has been doing a lot of work creating a Flash dialect so that Flash files can be generated using REBOL. One possibility might be exporting a layout to XML, so that it could be readily transformed into other dialects via XSL? That way (depending on the complexity of the interface) a View layout could be exported and transformed into Flash, HTML, or some other interface representation...

Reading XML in would ideally be handled in a very OO way. Unfortunately (or fortunately) this seems to be the best way to manage things... You would end up with an object that represents your XML within REBOL. Elements could be accessed using the standard REBOL path nomenclature, along with the attributes of those elements. Another thing that would be useful would be extraction of XML fragments via path access, copying them into a new XML object... It would need some sort of mechanism to iterate across the elements of the document too. For example:

    <person gender="male">
        <firstname>Porter</firstname>
        <lastname>Woodward</lastname>
    </person>

    person: to-xml-object read %person.xml

    print person/firstname  ==> Porter
    print person/gender     ==> male

At least these are just some of my thoughts on how this might be done.

- Porter
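A hypothetical to-xml-object along those lines could be layered on the built-in parse-xml. The sketch below only flattens the root element's attributes and its immediate children - no nesting, namespaces or mixed content - but it is enough for the person.xml example:

    REBOL []

    to-xml-object: func [xml [string!] /local root spec attrs node] [
        root: first third parse-xml xml    ; e.g. ["person" ["gender" "male"] [...]]
        spec: copy []
        if attrs: second root [            ; attributes become fields
            foreach [name value] attrs [repend spec [to-set-word name value]]
        ]
        foreach node any [third root []] [
            if block? node [               ; child element: [name attrs content]
                repend spec [to-set-word first node first any [third node [""]]]
            ]
        ]
        make object! spec
    ]

    person: to-xml-object read %person.xml
    print person/firstname    ; Porter
    print person/gender       ; male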

 [18/19] from: brett:codeconscious at: 9-Jan-2002 8:13


Hi Porter, Thanks for your reply.
> That way (depending on the complexity of the interface) a View layout
> could be exported and transformed into Flash, HTML, or some other
> interface representation...
That could be interesting, but was not my intention.
> Reading XML in would ideally be handled in a very OO way. Unfortunately
> (or fortunately) this seems to be the best way to manage things... You
> would end up with an object that represents your XML within REBOL.
> Elements could be accessed using the standard REBOL path nomenclature
Gavin has provided code along these lines already. I actually wanted to point out via the example the opposite. That if you had a VID layout in XML, you would definitely not want to handle it in an OO way. You would want to receive a VID block.
> For example:
>
> <person gender="male">
> <firstname>Porter</firstname>
> <lastname>Woodward</lastname>
> </person>
I'd like to see [person male "Porter" "Woodward"] or something else dialectical. Of course a grammar is implied here - but that should be part of the process.
> person: to-xml-object read %person.xml
>
> print person/firstname ==> Porter
> print person/gender ==> male
Gavin called it "xml-to-object". Have a look at: http://www3.sympatico.ca/gavin.mckenzie/ I'd like to see the expressivity of XML brought up to Rebol not Rebol's expressivity brought down to the level of XML. But I appear to be seriously out-of-step with everyone else! :) Brett.
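A speculative sketch of that direction, again sitting on parse-xml: walk the tree and emit a flat block, with words for element names and attribute values and strings for text content. Fed Porter's person.xml it would yield roughly [person male firstname "Porter" lastname "Woodward"] - close to, though not exactly, the form above, since element names are kept:

    REBOL []

    ; Naive sketch: assumes attribute values are valid word spellings.
    xml-to-dialect: func [xml [string!] /local out emit] [
        out: copy []
        emit: func [node [block!]] [
            append out to-word first node                ; element name
            if second node [                             ; attribute values
                foreach [name value] second node [append out to-word value]
            ]
            foreach child any [third node []] [
                either block? child [emit child] [       ; nested element or text
                    child: trim copy child
                    if not empty? child [append out child]
                ]
            ]
        ]
        emit first third parse-xml xml
        out
    ]

    probe xml-to-dialect read %person.xml
    ; [person male firstname "Porter" lastname "Woodward"]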

 [19/19] from: gerardcote::sympatico::ca at: 13-Jan-2004 21:56

domain specific languages design info and REBOL DOCS


Hi Jason, while I followed a link about Rob Mee related to the subject above, I found a resource pool at the ACM portal. The following link is your open door for more reading on the subject and many related ones ...

http://portal.acm.org/toc.cfm?id=949344&type=proceeding&coll=GUIDE&dl=ACM

A couple of the presented papers particularly attracted my attention, but I will let you select your own readings ...

================= Subject change =================

And while I think about it: since you offered me some help to start with Vanilla, I think I will accept your offer and use Vanilla as a starting point to share my thoughts about REBOL learning and the code snippets I will find and receive from contributors, or create myself, all along the way. I would not start another one for this work but complete the existing RIT site, which started around 278 days ago!!!

http://compkarori.com/vanilla/display/rebol-main

If ML members want to comment on what I and other contributors deposit there as we go, it could be interesting and useful to create some space to exchange, as Petr and you suggested. I have yet to evaluate the different advantages of each offering from a practical point of view. A world under AltME would do, as would a Web forum - REBOLTALK maybe (be it written in PHP or whatever - for me the main reason I use the tool is not to show REBOL power; this will come later when we'll have our completed REBOL DOC and learning toolkit), but in the short term I will probably opt for the second choice since it will offer greater visibility to the rest of the world during the process. For me, IOS also suffers the same visibility problem as AltME here. Vanilla is less prone to this visibility issue since it produces Web output, but I don't know if it will be as easy to follow the many threads that would eventually originate from this collective work, since every snippet is separated from the others while reviewing them.

The script library with its new add-ons for ML archive browsing is also very interesting, and even the presentation Carl did of the recent ML list is very interesting in some way. But none of them permits us to directly send REAL-TIME messages to the existing list in any way - be it to annotate or to add new material. And this is what is missing in this case.

Will be back to work soon. For the moment I am reviewing some recent VID and View material submitted to the Library by Cybarite under the name VID-usage.r. Don't miss it. This is an enhanced version of the original easy-vid.r written by Carl. My appreciation: it's an A. Even if I found some small glitches, which I will send back to him soon as feedback, I can assure you that the work done merits many congratulations. It was a long-awaited item on my wish list. Many well-kept secrets are revealed and well documented. Many contributors also shared code and explanations to help Cybarite deliver this final collaborative masterpiece, but the final merit goes to Cybarite, and I thank him very, very much in my own name and in the name of all those readers who will read and reread it for some time before mastering all the aspects that VID and its numerous styles deliver for us...

Regards,
Gerard

Notes
  • Quoted lines have been omitted from some messages.