World: r3wp
[Profiling] Rebol code optimisation and algorithm comparisons.
Izkata 20-May-2010 [260] | It is? I've only used it for relatively small data, so never had a problem.. |
Steeve 20-May-2010 [261x2] | Besides, you don't need to store triples as blocks in the triple store (they can be flattened); then you just need to do some math to calculate the index of a triple. The advantage is that you could still use the triple store to do some slow searching... |
... with find | |
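A minimal, hedged sketch of the flattened layout Steeve describes (the names STORE and TRIPLE-AT are illustrative, not from the chat):

    store: ["tweety" "isa" "canary" "Sylvester" "isa" "cat"]
    triple-at: func [n [integer!] "1-based triple number"][n - 1 * 3 + 1]
    copy/part (at store triple-at 2) 3   ;== ["Sylvester" "isa" "cat"]
    find store "canary"                  ; slow linear search with FIND still works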
Terry 20-May-2010 [263x8] | yeah, i thought of that, but once the inference is there, not sure there's a need |
with maybe the exception of pattern matching? | |
i1 indexes / 3 always give a decimal ending .3333; i2 indexes / 3 always give .6666; i3 indexes / 3 give an integer | |
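A hedged sketch of the same position test, using modulo rather than inspecting the decimal (POSITION is an illustrative name):

    position: func [i [integer!]][
        switch i // 3 [1 ['subject] 2 ['predicate] 0 ['object]]
    ]
    position 4   ;== subject (index 4 starts the second triple)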
another advantage of a flat triple store is simple key/value lookups, ie: if you have an html element <div id="20984">, the contents of the div come back with a direct pick, ie: pick store 20984 .. doesn't get any faster than that | |
And this data is tight.. the 250,000 real-world triples (managing a multi-million dollar electrical equipment rental biz, including 1500 contacts, invoicing, and 10,000s of items) are only 9MB... zips to 1.3MB | |
that's all the data and logic for a medium-sized company, stored on a floppy. | |
"To make it practical for other operations too, the triple store should be defined so that: 1) every triple is stored as a four-tuple, the additional field being a unique triple ID..." Why not just use the index of the first part of the triple? ie: ["tweety" "isa" "canary" "Sylvester" "isa" "cat" ...] so 1 and 4 are the indexes of the two triples (i1), and the next two positions give i2 and i3: i1: ["tweety" [1] "Sylvester" [4]] i2: ["isa" [1 4]] i3: ["canary" [1] "cat" [4]] | |
so to determine what Sylvester is.. s: i1/("Sylvester") >> [4] p: i2/("isa") >> [1 4] do an intersect to get [4], then the value is at index 4 + 2 (= "cat") | |
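That lookup as a runnable hedged sketch, using the i1/i2/i3 blocks above (STORE names the flat block; the name is mine):

    s: select i1 "Sylvester"            ;== [4]
    p: select i2 "isa"                  ;== [1 4]
    hits: intersect s p                 ;== [4]
    print pick store (first hits) + 2   ;== "cat"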
Ladislav 20-May-2010 [271] | Why not just use the index of the first part of the triple? - yes, it can be done that way, but if you need to remove triples, you will find that the fourth field may be practical |
Terry 20-May-2010 [272] | how does the 4th make it more practical? |
Ladislav 20-May-2010 [273] | how... - this way, you will have a triple ID that does not change even if other triples are removed. That is not true for your index. |
Terry 20-May-2010 [274] | If you remove a triple, you'd need to reindex everything, no? |
Ladislav 20-May-2010 [275] | actually, that would be costly |
Terry 20-May-2010 [276x2] | I'd tend to null it out instead, and reuse it later |
I have thought of a fourth ID for a triple before.. but it represents the triple itself.. rarely needed it, though.. there's always a way around | |
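A hedged sketch of the null-and-reuse idea above: overwrite the deleted triple in place so every other index stays valid (NEW-TRIPLE is hypothetical):

    store: ["tweety" "isa" "canary" "Sylvester" "isa" "cat"]
    change at store 4 reduce [none none none]   ; triple 2 becomes a free slot
    ;; a later insert can reuse the slot: change at store 4 new-triple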
Ladislav 20-May-2010 [278] | Yes, sure, as I said, it is practical mainly in case you want a simple and fast triple delete operation |
Terry 20-May-2010 [279x2] | yeah.. that could remove some ambiguity |
er.. wait, there's no ambiguity.. http://en.wikipedia.org/wiki/Single_Source_of_Truth | |
Ladislav 20-May-2010 [281] | ...but you are right, that there is always a way around |
Terry 20-May-2010 [282x2] | In my old AtomDB, which I made in Rebol some time ago, I had an index for each value, ie: the number "42" was stored only once and referenced by index: values: ["42" "November" "Baseball" ...] You could have a million birthdays recorded in "November" with a single element: bill-bday-month: 2 frank-bday-month: 2 frank-age: 1 answer-to-life-the-universe-and-everything: 1 |
It makes more sense for larger values.. nice when you can use a single symbol to represent a 1GB binary. Anyway, thanks everyone for the help. | |
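A hedged sketch of that value-interning idea (INTERN is a hypothetical name, not AtomDB's API):

    values: []
    intern: func [v /local pos][
        either pos: find values v [index? pos][
            append values v
            length? values
        ]
    ]
    intern "November"   ;== 1
    intern "42"         ;== 2
    intern "November"   ;== 1, stored only once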
Ladislav 20-May-2010 [284] | Just BTW, I doubt that INTERSECT is the fastest way to intersect two hashes; that might need some profiling too. |
Maxim 20-May-2010 [285] | terry, any benchmark results on a dense search with a few million records? also: how long does generating the index on that data take, and how much RAM is your triple index consuming? |
Steeve 20-May-2010 [286] | Ladislav, no prob with a map! |
Terry 20-May-2010 [287x4] | Max, building the indexing functions now |
indexing shouldn't take too long.. one foreach pass against the data | |
probably a faster way, but I'll let the gurus beat their chests over it :) | |
Izkata.. here's another option.. not sure how efficient it is.. orig: ["one" "two" "three" "four" "five" "six"] blk: [] foreach [a b c] orig [d: array/initial [1] reduce [a b c] append blk d] | |
Izkata 20-May-2010 [291x2] | Well, if you're doing it that way, "append/only blk reduce [a b c]" should be faster - no call to 'array. May as well also have "blk: make block! (length? orig) / 3" - don't remember how much of a speedup that was in your previous tests, though |
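Those two suggestions combined, as a hedged sketch: preallocate the result block and use APPEND/ONLY instead of calling ARRAY in the loop:

    orig: ["one" "two" "three" "four" "five" "six"]
    blk: make block! (length? orig) / 3
    foreach [a b c] orig [append/only blk reduce [a b c]]
    ;; blk == [["one" "two" "three"] ["four" "five" "six"]]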
Terry 20-May-2010 [293x11] | **Izkata beats his chest, challenging all comers** :) |
Max: Indexing 244,327 triples took 4:20 on a rather snappy Windows box with 6GB RAM. The DB is consuming 114MB RAM | |
(Rebol 2) | |
1 GB RAM = 2,143,219 triples and their indexes (approx) | |
Within that data (real world) I have 3,733 cable triples ("dfuflfi33s" "isa" "cable") ("dfuflfi33s" is a unique 10-byte ID that I stick on everything). Using the code posted earlier, I can pull a final block of indexes for each of those cables in 0.0012 seconds | |
oops.. decimal error: .012, not .0012 | |
crawling with foreach to get the same result takes .09, about 7.5 times slower | |
pulling a single unique email address (out of 429) took 0.013, so the time seems roughly fixed around 0.012 seconds regardless of the density | |
There's a fundamental flaw with this.. let's say I find 3000 customers, but now I want to pull their email property: foreach customer customers [intersect the customer with i1 and the email i2 (pseudocode)]. Once is fine.. but 3000 loops is really slow. Probably need to store the properties as objects: data: ["bob" [email "bob@example.com" lname "Jones"] ...] then continue to index email, lname, etc. | |
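A hedged sketch of that properties-as-objects layout; one SELECT fetches the whole property block, so no per-property intersect loop is needed:

    data: ["bob" [email "bob@example.com" lname "Jones"]]
    props: select data "bob"
    select props 'email   ;== "bob@example.com"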
OK, take all the noise above and throw it away.. I have the perfect solution (how's that for a tease?) | |
It ties in with my old AtomDB developments, but uses some of the methods we went over.. AtomDB used foreach to crawl.. fine for smaller datasets but too slow on large data | |
Maxim 20-May-2010 [304] | foreach is always slower than find/part, so whatever algorithm you use, replace the foreach in its loop with that and it will be even better. |
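A minimal sketch of the point: let native FIND do the scanning rather than testing each element in an interpreted FOREACH body (FIND/PART would additionally bound the search range):

    store: ["tweety" "isa" "canary" "Sylvester" "isa" "cat"]
    hits: copy []
    pos: store
    while [pos: find pos "isa"] [
        append hits index? pos
        pos: next pos
    ]
    ;; hits == [2 5]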
Terry 20-May-2010 [305] | It's becoming apparent that these large datasets need to be reduced when it comes to FINDing strings. Imagine: ["one" "two" "three" "four" "five" ... "one hundred million"] should be o: ["one" "one hundred million"] t: ["two" "three"] f: ["four" "five"] or even fo: ["four" "fox" "food"] fi: ["five" "fifty" "fiver niner"] .. pull out the first two chars and there's your search block |
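A hedged sketch of that two-character bucketing (names are illustrative):

    buckets: copy []
    foreach v ["four" "fox" "food" "five" "fifty"] [
        key: copy/part v 2
        either blk: select buckets key [append blk v][
            repend buckets [key reduce [v]]
        ]
    ]
    select buckets "fo"   ;== ["four" "fox" "food"]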
Graham 21-May-2010 [306] | You could make a complete hash of it. |
Terry 21-May-2010 [307] | http://sociopathways.files.wordpress.com/2009/12/hash-browns.jpg |
Terry 22-May-2010 [308x2] | Steeve, back to your method with some changes.. Some early benchmarks on this new method: 3,000,000 objects: subjects: [[age 42 fname "Steeve"] [fname "Tweety" age 75] [url "http://rebol.com"] ...] 3,000,000 key/value pairs that index the subjects block: i1: ["Steeve" 1 "Tweety" 2 "Rebol" 3 ...] then.. theindex: select i1 "Rebol" out: pick subjects theindex I can do 1,000,000 of these in 1.3 seconds, or 769,230/s (older box with 2GB RAM). Then finish the other two triple indexes (preds, vals): i2: [age [1 2] fname [1 2] url [3]] i3 should have its values converted to MD5s (as suggested): i3: [#{12CE1E73A3EABF1623970A0C53B9CA1F} [3]] |
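That lookup as a runnable hedged sketch with toy data standing in for the 3,000,000 records; making i1 a hash! is my assumption, presumably what keeps SELECT fast at this scale:

    subjects: [[age 42 fname "Steeve"] [fname "Tweety" age 75] [url "http://rebol.com"]]
    i1: make hash! ["Steeve" 1 "Tweety" 2 "Rebol" 3]
    theindex: select i1 "Rebol"   ;== 3
    out: pick subjects theindex   ;== [url "http://rebol.com"]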
The problem with the other method was the overall cost of looking up properties.. if all you needed was a single property (ie: all things where age = 75) then that was fine, but usually you want more info. So if you found 5000 things that match the query, you'd need to run the whole process 5000 times to get all their email addresses. It just makes more practical sense to keep things as things, along with all of their properties. | |