
[REBOL] Re: RFC: Cross-language benchmark proposal

From: joel:neely:fedex at: 7-Nov-2002 14:18

Hi, Gregg,

Gregg Irwin wrote:
> <<
> Languages: C, Java, Perl, Python, REBOL
> >>
>
> What about VB? *Lots* of people use it, though it's not
> cross-platform.
If enough folks want to see it for comparison purposes, and enough folks have the means to run the benchmarks (we should have multiple participants for each benchmark task, IMHO, for statistical stability), then I (for one) wouldn't throw up a major objection, although my personal suspicion is that the target zones for VB and REBOL have significant non-overlap.
> <<
> Coding: Starting with the published versions on "Shootout", with
>         open contribution from the REBOL community for REBOL
>         versions in two flavors:
>
>         Standard: Submissions must adhere to the "same thing"
>                   rule from "Shootout".
>
>         Custom: Any approach that gets the correct results.
> >>
>
> As contributions are made, would custom entries for other languages
> be allowed as well? Maybe a better question is: is the goal to show
> off REBOL, even at the expense of other language proponents crying
> "foul" because it gets special treatment?
>
> These are really two different types of benchmarks, right? The
> Standard version is an algorithm benchmark; how a language performs
> using a specified algorithm. The Custom version is a "result
> oriented" benchmark. Just thinking about how to organize things to
> make that clear.
I agree that there are two issues; I'd suggest that the presentation of results could clearly separate "standard" submissions and timings from "custom" submissions. We're not trying to take on the whole field of algorithm analysis here, just to show some points of "compare and contrast".

Also, some of the optimize-by-design issues would actually require significant expertise in the various languages. Do we want to worry about finding/developing/cultivating expertise in all languages, or just provide a fair comparison along with some other interesting info?
> <<
> Solutions: Must take the form of an object with two methods:
>
> Rationale: This will allow use of a single test harness
>            (see below) to gather consistent stats.
> >>
>
> I haven't looked at the tests to know if N is sufficient for all
> purposes. Do you envision anything more elaborate being required?
I'm looking at that now, but I suspect we can make sure, in some fashion, that a single integer is enough of a "size" argument, as in the case of the file I/O problem, where the "size" is the number of copies of a standard disk file.
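A rough sketch of the kind of object I mean follows; the method names are only placeholders, since the actual two-method interface is one of the things the Developers would have to settle on.

    solution-prototype: make object! [
        ;; NOTE: RUN and CHECK are placeholder names, not part of the
        ;; proposal; the real interface is still to be agreed on.
        run: func [n [integer!]] [
            ;; perform the task, scaled by the single integer "size" N
            ;; (e.g. N copies of the standard disk file for the file
            ;; I/O task); return the task's result
            none
        ]
        check: func [result] [
            ;; return TRUE if RESULT is the correct answer for the task
            none
        ]
    ]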
> <<
> Participants: Anybody who wants to contribute cycles, under the following
>               constraints:
>
> Tasks: All results for a given task (run in multiple
>        languages) must be submitted at one time.
> >>
>
> So if I want to contribute, does this mean I need to write the
> solution in each language, or is this just for test *result*
> submissions?
I see two populations (which may certainly overlap):

    Developers - who would write and submit source for the test harness and the individual tasks, and

    Testers - who would retrieve source for the test harnesses and tasks from a well-known source (once the harnesses and tasks had been designed and implemented), would run the tests as supplied, and report results.

I would see a front-end period where Developers would haggle out details, agree on designs, and make their submissions, after which the source artifacts would be available to the Testers. Anyone with the appropriate skills and resources would be free to offer services as Developer and/or Tester, but there'd be some point at which a "release" of the sources for harness/tests would be frozen and made available for testing.

Of course, all of this has to do with the "standard" solutions. The "custom" solutions could be handled somewhat more informally *after* at least the harness (for timing submissions) and a canonical "same thing" REBOL solution (for validity checking) were completed.
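By "harness" I mean something as minimal as the sketch below: take a solution object and a size, and hand back the elapsed time. It assumes the placeholder RUN method from the earlier sketch; the details (output format, repetition counts, etc.) are exactly what the Developers would haggle out.

    time-solution: func [
        "Run one solution at size N and return the elapsed time"
        soln [object!] n [integer!]
        /local start
    ] [
        start: now/precise
        soln/run n
        difference now/precise start
    ]

    ;; usage (assuming some ACKERMANN-SOLUTION object built to the
    ;; placeholder interface above):
    ;;     print time-solution ackermann-solution 8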
> What requirements will there be for implementation contributions?
> E.g. the Ackermann example has shown that there may be multiple
> approaches taken in REBOL.
I think we should be fairly picky about the "same thing" rule; there are tons of ways of writing functionally and algorithmically equivalent code, and even more ways of writing functionally equivalent code that uses alternate algorithms. All of that is of no benefit to our Prime Directive (certainly at least initially) of showing how a *small* set of specific algorithms for reasonably common tasks perform in REBOL and a few comparable/competitive languages.

Even adding a subordinate place for "custom" solutions that tweak for REBOL features/idioms is pushing the border of relevance IMHO, but might help newcomers get the feel of how the REBOL approach is different from the mindsets of some other languages.

-jn-

----------------------------------------------------------------------
Joel Neely    joelDOTneelyATfedexDOTcom    901-263-4446
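[For reference, a "same thing" REBOL rendering of the Shootout's Ackermann task looks something like the direct recursive sketch below; a memoized or iterative rewrite would count as a different algorithm and would belong with the "custom" submissions.]

    ack: func [m [integer!] n [integer!]] [
        either m = 0 [n + 1] [
            either n = 0 [ack m - 1 1] [
                ack m - 1 ack m n - 1
            ]
        ]
    ]

    print ack 2 3   ;; == 9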