World: r4wp
[Databases] group to discuss various database issues and drivers
afsanehsamim 17-Nov-2012 [367x5] | I need code for showing a message to the user; that is, after each join it should show the user whether the value is correct or not |
Guys! Could you please tell me: after comparing the values of two tables, how can we show the output on a web page? | |
After writing the queries: foreach row read/custom mysql://[root-:-localhost]/test ["select data.oneone,data1.oneone from data LEFT JOIN data1 ON data.oneone=data1.oneone"] [print row] foreach row read/custom mysql://[root-:-localhost]/test ["select data.onetwo,data1.onetwo from data LEFT JOIN data1 ON data.onetwo=data1.onetwo"] [print row] .... | |
I got these results: c c, a none, t t, a none, e none, r none, o none, a none | |
Now, how can I write a query for all the values that are the same in both tables, and print a "correct" message on the web page? | |
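A LEFT JOIN returns NULL in the right-hand column whenever a value has no match in the other table, so the page logic only needs to test for that. A minimal sketch of the idea in Python with sqlite3 (table and column names `data`, `data1`, `oneone` taken from the query above; sample values are made up, and `print` stands in for the web-page output):

```python
import sqlite3

# In-memory database standing in for the MySQL 'test' database above
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data  (oneone TEXT);
    CREATE TABLE data1 (oneone TEXT);
    INSERT INTO data  VALUES ('c'), ('a'), ('t');
    INSERT INTO data1 VALUES ('c'), ('t');
""")

# LEFT JOIN: the right-hand column is NULL when there is no match
rows = conn.execute("""
    SELECT data.oneone, data1.oneone
    FROM data LEFT JOIN data1 ON data.oneone = data1.oneone
""").fetchall()

for left, right in rows:
    if right is None:
        print(f"{left}: value is NOT in both tables")
    else:
        print(f"{left}: value is correct (present in both tables)")
```

The same `right is None` test can drive whatever HTML the page emits.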
afsanehsamim 23-Nov-2012 [372x2] | Hey guys... I have just two days left for my project! Could you help me? |
I could not do the last step... I need to show the result of comparing the values on a web page | |
TomBon 11-Dec-2012 [374] | a quick update on elasticsearch. Currently I have reached 2TB of data (~85M documents) on a single node. Queries are now starting to slow down, but the system is very stable even under heavy load. While queries on average took between 50-250ms against a dataset of around 1TB, the same queries now take between 900-1500ms. The average allocated Java heap is around 9GB, which is nearly 100% of the max heap size, with a setting of 15 shards and 0 replicas. elasticsearch looks like a very good candidate for handling big data with a need for 'near realtime' analysis. Classical RDBMSs like mysql and postgresql were grilled at around 150-500GB. Another tested candidate was MongoDB, which was great too, but since it stores all metadata and fields uncompressed, the waste of disk space was ridiculously high. Furthermore, query execution times differed unpredictably, by a factor of 3, without any known reason. Tokyo Cabinet started fine, but around 1TB I noticed file integrity problems, which led to endless restoring/repairing procedures. Adding sharding logic by coding an additional layer wasn't very motivating, but could solve this issue. Within the next six months the data size should reach the 100TB mark. It would be interesting to see how elasticsearch will scale and how many nodes are necessary to handle this efficiently. |
Maxim 11-Dec-2012 [375] | when you talk about "documents" what type of documents are they? |
Gregg 11-Dec-2012 [376] | Thanks for the info Tomas. |
TomBon 12-Dec-2012 [377] | crawled html/mime embedded documents/images etc. as plain compressed source (avg. 25kb) and 14 searchable metafields (ngram) to train different NN types for pattern recognition. |
Maxim 12-Dec-2012 [378] | thanks :-) |
MaxV 15-Jan-2013 [379] | I have a problem with RebDB: how does db-select/group work? Example: >> db-select/where/group/count [ID title post date] archive [find post "t" ] [ID] ** User Error: Invalid number of group by columns ** Near: to error! :value |
Endo 15-Jan-2013 [380x2] | Don't you need to use aggregate functions when you grouping? |
* when you use grouping. | |
Scot 15-Jan-2013 [382x3] | I use the SQL dialect like this: sql [select count [ID title post date] from archive group by [ID title post] where [find post "t"]] The trick with this particular query is that the "count" selector must have exactly one more column than the "group by" selector. The first three elements [ID title post] are used to sort the output and the last element [date] is counted. The output will be organized: ID title post count |
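For comparison, the same shape of query in standard SQL, sketched with Python's sqlite3 and made-up sample rows (RebDB's dialect differs, but the relationship Scot describes holds: group by the leading columns, count the one remaining column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE archive (ID INTEGER, title TEXT, post TEXT, date TEXT);
    INSERT INTO archive VALUES
        (1, 'Hello', 'first post',  '2013-01-01'),
        (1, 'Hello', 'first post',  '2013-01-02'),
        (2, 'Bye',   'second post', '2013-01-03');
""")

# Group by [ID title post], count the remaining column (date),
# keep only rows whose post contains "t"
rows = conn.execute("""
    SELECT ID, title, post, COUNT(date)
    FROM archive
    WHERE post LIKE '%t%'
    GROUP BY ID, title, post
    ORDER BY ID, title, post
""").fetchall()

for row in rows:
    print(row)   # (ID, title, post, count)
```

Each output row carries the three grouping columns plus the count, matching the "ID title post count" layout above.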
I would like to be able to include other columns in the output that are not part of the grouping or count, but I haven't figured out how to do this in RebDB. I have used a parse grammar on the output to achieve the desired result. | |
I would also like to query the results of a query, which I haven't figured out how to do without creating and committing a new database. So I have used a parse grammar to merge two queries. | |
Pavel 25-Jun-2013 [385] | SQLite version 4 announced/proposed. The default built-in storage engine is a log-structured merge database instead of the B-tree in SQLite3. As far as I understand the docs, this store can be used standalone or with an SQL frontend. Google SQLite4. |
Kaj 25-Jun-2013 [386] | Cool |
Endo 26-Jun-2013 [387] | I cannot see any announcement on the sqlite.org web site? SQLite 3.7.17 is the latest and recommended version? |
Kaj 26-Jun-2013 [388] | I saw code last year, but it's probably still in deep development |
Pavel 26-Jun-2013 [389] | Endo, as I wrote, google for SQLite4. The direct link is: http://sqlite.org/src4/doc/trunk/www/design.wiki. There is also a mirror of the sources at https://github.com/jarredholman/sqlite4. |
Pekr 4-Jul-2013 [390x4] | Has anyone tried to work with ODBC under R3? I somehow can't load the following ODBC driver DLL: https://github.com/gurzgri/r3-odbc |
Or, put differently, has anyone worked with Excel files via ODBC, using either R2 or R3? I tried Graham's code, which works for .xls files, but not .xlsx files. When I convert my file to .xls, R2 returns "not enough memory" :-( p: open [ scheme: 'ODBC target: "Driver={Microsoft Excel Driver (*.xls)};DriverId=790;Dbq=c:\path-to-file\file.xls" ] conn: first p insert conn "select * from [Sheet1$]" result: copy conn | |
As for R3 - maybe there was also some other R3 ODBC extension, somehow can't find it .... | |
hmm, found it, but it's no longer available - http://www.rebol.org/ml-display-thread.r?m=rmlXJPF ... the problem with gurzgri's DLL is that I somehow can't import it with any R3 version ... | |
Kaj 4-Jul-2013 [394x3] | What you found looks to be the latest version of that |
I've also had loading problems with R3 extensions on Linux that worked before. Sometimes you seem to need an older R3, sometimes a newer | |
If all else fails, recompile the C code | |
Pekr 4-Jul-2013 [397] | well, I have even old latest Carl's view.exe, does not work either ... lost battle here ... not fluent with recompile of ODBC DLL, does not imo guarantee, that loading it in R3 will actually work. I wonder if there was any change to import function or to extension mechanism itself ... |
Kaj 4-Jul-2013 [398] | Bug fixes, I think, but they also seem to cause compatibility regressions |
DocKimbel 4-Jul-2013 [399x2] | Do all your bindings have Red-level interfaces now? |
I guess some like SDL don't need that. | |
Kaj 4-Jul-2013 [401] | Yes, it's in progress. Some like SQLite are one-to-one in Red like in Red/System. SDL is used more as a part in other low level bindings, such as OpenGL. OpenGL itself is waiting for floats in Red |
Pekr 4-Jul-2013 [402x3] | OK, so I got a valid ODBC connection string for .xlsx files. R2 crashes when copying the data though ... p: open [ scheme: 'ODBC target: "Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=C:\Work\sales.xlsx;" ] |
OK, got it kind of working by increasing p/locals/rows to 10K lines ... the Excel sheets are so complex that it does not return half of the info; it most probably expects more columnar/db-style data ... | |
Pity that Saphirion's Excel dialect is no longer available for download. Will try Anton's old COMlib code ... | |
DocKimbel 4-Jul-2013 [405] | Sorry for the off-topic question, I thought I was in another channel. |
Pekr 4-Jul-2013 [406] | You are used to Red channel being at the top, right? :-) |
DocKimbel 4-Jul-2013 [407:last] | Just forgot to check the channel name before posting. ;-) |