Mailing List Archive: 49091 messages

[REBOL] Cross-language benchmarking

From: joel::neely::fedex::com at: 5-Nov-2002 11:38

Hello, all,

Pardon my manners in not replying to individual remarks, but "so many emails, so little time!" ;-)

Let me raise a couple of points about what we might accomplish with such an effort, and who might be in the audience, as a way of thinking about how such an effort might be done, and what languages might be included.

Option 1
========
Goal:      Demonstrate to the world that REBOL is a viable, competitive language for many common programming tasks (where "competitive" is defined in terms of run-time performance).
Audience:  Developers at large.
Languages: Commonly used languages, to maximize the likelihood that the reader will be familiar with one or more of the other "competitors": C/C++, Java, Perl, Python, VB, ...
Tasks:     Toy problems that allow specific aspects of performance to be instrumented/measured. Also some small "real" tasks in REBOL's "sweet spot" of performance.
Comment:   The tests must be fair, and must be seen to be fair. We've all seen the kind of unvarnished advocacy that claims things like "X is faster than Tcl, uses fewer special characters than Perl, and is cheaper than COBOL on an IBM mainframe, therefore it's the best!" which only hurts the credibility of the claimant.

Option 2
========
Goal:      Demonstrate to the world that REBOL is a viable notation for use in RAD/prototyping tasks, and makes a good "tool to think with".
Audience:  Same as (1).
Languages: Same as (1).
Tasks:     Reasonably small "spike" (in the XP sense) tasks that would be recognizable as related to application-level needs.
Comment:   It's also fair to include code size and programmer effort in such discussions, but these are notoriously difficult to instrument objectively.

Option 3
========
Goal:      Identify and document REBOL programming techniques that have a substantial effect on performance, and build a vocabulary of "performance patterns" for future use.
Audience:  REBOL programmers.
Languages: REBOL only.
Tasks:     Those situations where reasonably small effort in refactoring/strategy could produce significant gains in performance.

Option 4
========
Goal:      Help identify "hot spots" in REBOL where performance optimization would have significant perceived value.
Audience:  RT.
Languages: REBOL and a limited number of "compare for reference" alternative languages.
Comment:   I want to be clear on this one: I'm not suggesting that RT staff are unaware of performance issues! Not at all! However, several on this list (including RT staff) have observed that the good folks at RT don't get any more hours in the day than the rest of us (and therefore have to pick and choose where to put their mere 20 work hours per day ;-) If the list members can help spread the load of finding issues worthy of attention, help (by participation) to indicate which performance issues are considered higher priority than others, or even find one glitch that has escaped notice to date, then I think the effort would be a net gain.

General Comment
===============
Benchmarking is tricky business at best, and A Dark Art at worst. For results to be meaningful, the sample base must be large enough (and the individual tests must be large enough) that transient or special-case effects get averaged out (e.g., garbage collection that happens now and again during series flogging, differences in performance due to different CPU speeds, free memory, disk I/O bandwidth, network bandwidth/congestion, concurrent processing on computers with real operating systems, etc.). It will be of little use (except to the submitter! ;-) to have a single benchmark comparing REBOL to Glypnir on an Illiac IV. The strong benefit, IMHO, of using primarily cross-platform languages is that it allows us to perform the tests under the widest possible range of conditions, thus improving the quality of our averages.
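The "average out the transients" advice above can be sketched with a minimal harness. This example is not from the original thread; it is an illustrative sketch in Python (one of the comparison languages proposed earlier), and the function names and the stand-in workload are hypothetical:

```python
import statistics
import time

def bench(task, repeats=30):
    """Run `task` `repeats` times; return the per-run timings in seconds.

    Many repeats let transient effects (an occasional GC pause, cache
    state, other processes sharing the box) average out instead of
    dominating a single measurement.
    """
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        timings.append(time.perf_counter() - start)
    return timings

def sample_task():
    # Hypothetical stand-in workload: repeated appends to a growing
    # series ("series flogging"), then one final join.
    parts = []
    for i in range(10_000):
        parts.append(str(i))
    return "".join(parts)

runs = bench(sample_task)
print(f"mean {statistics.mean(runs):.6f} s  "
      f"stdev {statistics.stdev(runs):.6f} s  "
      f"min {min(runs):.6f} s  over {len(runs)} runs")
```

Reporting the minimum alongside the mean is a deliberate choice: the minimum approximates the task's cost with the least outside interference, while the mean and standard deviation expose how much the transient effects actually vary from run to run.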
That said, there's probably room for a widely used proprietary language (e.g., VB), since that's likely familiar to a significant portion of the target audience for options (1) and (2). We just need to be careful to have the widest possible set of alternatives run on *the*same*boxen* as the proprietary cases, so that we can make meaningful use of the results. (E.g., a single comparison of REBOL vs. C++ on a box running XP Pro would be hard to interpret beyond the obvious "which one is faster?")

-jn-

--
----------------------------------------------------------------------
Joel Neely                                    joelDOTneelyATfedexDOTcom
                                              901-263-4446