World: r3wp
[SQLite] C library embeddable DB.
Oldes 14-Apr-2009 [975x2] | 1s to 5s on 180 rows is bad, whatever you do. |
I'm not an SQLite user, but I would probably try to use EXPLAIN to see what's going on when you do the query - http://www.sqlite.org/lang_explain.html | |
Janko 14-Apr-2009 [977x2] | yes, I think so too.. it is interesting that the time looks the same on 4000 rows ... maybe part of the reason is also that I am on a very cheap VPS (12 EUR, 180 MB RAM, ? CPU) ... |
aha, thanks for the idea.. I will use EXPLAIN .. although it's such a simple query that it shouldn't do anything strange, I think | |
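[Editor's note] Oldes's EXPLAIN suggestion is easy to try: EXPLAIN QUERY PLAN shows whether a query does a full table scan or uses an index. A minimal sketch in Python's stdlib sqlite3 rather than the REBOL sqlite.r driver used in the chat; the `mails` table and its columns are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mails (user TEXT, domain TEXT, subject TEXT)")

# Without an index, the plan is a full table scan.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM mails WHERE user = ?", ("u1",)
).fetchall()
print(before)   # the detail column mentions SCAN

conn.execute("CREATE INDEX idx_mails_user ON mails (user)")

# With the index, the planner switches to an index search.
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM mails WHERE user = ?", ("u1",)
).fetchall()
print(after)    # the detail column mentions USING INDEX idx_mails_user
```

The detail text differs slightly between SQLite versions, but the SCAN-vs-SEARCH distinction is what matters when hunting a slow query like the one discussed here.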
Oldes 14-Apr-2009 [979] | also... if you want to index domain and user, you should not use varchar without length. |
Janko 14-Apr-2009 [980x2] | aha, I will try that .. and I can use INTEGER for user ... because now each user gets a folder like u<Int> ... and I could maybe put all 3 mailboxes into the same table so it would be only 1 insert / update instead of 3 on changes ... I didn't think performance would be the problem here (it still seems a little funny), because it's just a simple table without any relations or anything and no big amount of data |
I have used SQLite here and there for more real DB work and I never saw any critical slowness (except if you do something typical like inserting 100 rows, each in its own transaction, without BEGIN/COMMIT); in fact it always seemed very fast to me ... that's why I suspect none of this would show up if I had a better VPS. Also, because it fluctuates so much, I suspect the disk on the computer the VPS is on is maybe busy doing other stuff, so at one moment it is idle and works faster, and at another it waits for >3 seconds | |
Oldes 14-Apr-2009 [982] | Yes.. it's possible that the VPS does many other disk IO operations, so you have to wait. |
Janko 14-Apr-2009 [983] | but the changes you proposed will help me make it faster anyway, and I will try them |
Pekr 14-Apr-2009 [984] | you can use varchar even for indexes; it should speed things up significantly ... |
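[Editor's note] Janko's aside about "inserting 100 rows each in its own transaction (without begin commit)" is worth spelling out: in autocommit mode SQLite wraps every INSERT in its own transaction, and each commit forces a journal sync, which dominates on slow or busy disks like the VPS described above. A hedged sketch, with an invented `msgs` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msgs (id INTEGER, body TEXT)")

# Per-row commits: every INSERT is its own transaction. On a real disk,
# each COMMIT must sync the journal, which is what makes this slow.
for i in range(100):
    conn.execute("INSERT INTO msgs VALUES (?, ?)", (i, "row"))
    conn.commit()

# Batched: one BEGIN ... COMMIT around all the inserts.
with conn:  # the context manager wraps the block in a single transaction
    for i in range(100, 200):
        conn.execute("INSERT INTO msgs VALUES (?, ?)", (i, "row"))

print(conn.execute("SELECT COUNT(*) FROM msgs").fetchone()[0])  # 200
```

On an in-memory database the timing difference is small; on a disk-backed file the batched form is typically orders of magnitude faster.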
amacleod 16-Apr-2009 [985] | Is there a way to reorder columns in SQLITE? |
Janko 16-Apr-2009 [986] | ALTER TABLE supports just renaming a table and adding columns; for everything else you need to create a new table -> INSERT .. SELECT ... the data, drop the old table, and rename the new one |
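[Editor's note] Janko's recipe written out as SQL, run here from Python for concreteness. The `books` / `bk` / `chap` / `ref_number` names are borrowed from amacleod's example later in this chat; treat them as illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE books (ref_number INTEGER, bk TEXT, chap TEXT);
    INSERT INTO books VALUES (1, 'Genesis', 'ch1');
""")

# SQLite's ALTER TABLE only renames tables and adds columns, so
# reordering columns means rebuilding the table in the desired order:
conn.executescript("""
    BEGIN;
    CREATE TABLE books_new (bk TEXT, chap TEXT, ref_number INTEGER);
    INSERT INTO books_new (bk, chap, ref_number)
        SELECT bk, chap, ref_number FROM books;
    DROP TABLE books;
    ALTER TABLE books_new RENAME TO books;
    COMMIT;
""")

cols = [row[1] for row in conn.execute("PRAGMA table_info(books)")]
print(cols)  # ['bk', 'chap', 'ref_number']
```

Wrapping the rebuild in one transaction means a crash mid-way leaves the old table intact.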
sqlab 16-Apr-2009 [987] | In SQL there is no need to reorder the columns, as you can get any order you desire by using the column names in your SELECT statement. |
amacleod 16-Apr-2009 [988x3] | That sounds helpful..thanks sqlab. |
Can you do the same for inserting values? I know I can do it for updating row data, but I cannot figure out the syntax for inserting.. | |
For example: SQL reduce [{UPDATE books SET bk=?, chap=? WHERE ref_number=?} blk/2 blk/3 blk/1] | |
sqlab 16-Apr-2009 [991] | should be like insert into table (col3, col2, col1) values (?, ?, ?) |
amacleod 16-Apr-2009 [992] | works...thanks a lot. I had this fear of having to change large parts of my code if I added or removed columns...This solves that problem. |
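[Editor's note] sqlab's point applies symmetrically to UPDATE and INSERT: naming the columns decouples the statement from the table's physical column order, which is exactly what protects amacleod's code when columns are added or removed. A small Python sketch reusing the chat's `books` example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (ref_number INTEGER, bk TEXT, chap TEXT)")

# The column list controls which ? binds to which column, regardless of
# the order in which the columns were declared.
conn.execute(
    "INSERT INTO books (bk, chap, ref_number) VALUES (?, ?, ?)",
    ("Genesis", "ch1", 1),
)

row = conn.execute("SELECT ref_number, bk, chap FROM books").fetchone()
print(row)  # (1, 'Genesis', 'ch1')
```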
sqlab 21-Apr-2009 [993] | Janko: I just tested two concurrent processes writing into one table again. This time I used the approach of connecting and closing for each statement, as the link you gave suggested.. I still got locking. |
Janko 21-Apr-2009 [994] | I understood the text in link that if you get a lock, you wait for a while and try again.. and that by this method even it scales to quite big number of concurr processes compared to mysql for example |
sqlab 21-Apr-2009 [995] | trying again is already implemented in sqlite.r in the function do-step. I expanded the wait time with a random duration of up to one second; still not good enough. And now I additionally connected and disconnected according to the link. Again I encountered locks |
Janko 21-Apr-2009 [996x2] | aha ... hm.. I haven't tried it myself yet |
how many times do you retry? | |
sqlab 21-Apr-2009 [998] | a few 10k times |
Janko 21-Apr-2009 [999] | what, disconnecting and connecting? :) |
sqlab 21-Apr-2009 [1000] | sorry, that was the number of statements until a lock occurs |
Janko 21-Apr-2009 [1001] | aha .. does it at least significantly reduce the occurrence of the locks, or is it the same as no retry? |
sqlab 21-Apr-2009 [1002x2] | do-step seems to retry until not busy |
yes, it reduces them, but they still happen | |
Janko 21-Apr-2009 [1004] | He tried only 100 times (judging by the example he gave) .. and I don't know what he means by 500: >>> in line with mysql and others - at 500+ simultaneous users you start to get about a 10% drop because of lock overhead. <<< Most probably not 500 processes writing all the time, without delays |
sqlab 21-Apr-2009 [1005x2] | my mistake, I forgot to set the number of retries during connect. I will test again |
no difference; sooner rather than later I get a lock, even with 100 retries, after less than 1000 statements | |
Janko 21-Apr-2009 [1007x2] | hm.. too bad |
how are you testing it.. how many writes/sec .. processes? | |
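[Editor's note] The locking sqlab keeps hitting is inherent to SQLite's design: many readers, but only one writer per database file at a time, and a competing writer gets SQLITE_BUSY ("database is locked") once its busy timeout expires. A minimal reproduction with two connections standing in for two processes (throwaway temp file; short timeout so the demo fails fast instead of waiting):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None gives us manual transaction control; timeout is
# the busy-handler wait, i.e. how long a writer retries before giving up.
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE t (x INTEGER)")

a.execute("BEGIN IMMEDIATE")            # a takes the write lock
a.execute("INSERT INTO t VALUES (1)")

locked = False
try:
    b.execute("BEGIN IMMEDIATE")        # b cannot get the lock in time
except sqlite3.OperationalError as e:
    locked = "locked" in str(e)

a.execute("COMMIT")                     # a releases the lock
b.execute("BEGIN IMMEDIATE")            # now b succeeds
b.execute("COMMIT")
print(locked)  # True
```

Retrying (as sqlite.r's do-step does) only stretches the timeout; under sustained write load from several processes, some writer will eventually exhaust its retries, which matches sqlab's observations.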
Robert 21-Apr-2009 [1009x3] | sqlab, use a proxy process that does the prioritization. Send requests from the clients to the proxy process, and it handles the single connection to the database. |
IMO you are much more flexible that way anyway. | |
You can add things like priority, read-only support etc. | |
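[Editor's note] Robert's proxy idea can be sketched with threads standing in for processes: clients only enqueue requests, and a single writer owns the one and only database connection, so SQLite's write lock is never contended. Everything here (queue transport, schema, names) is invented for illustration; a real multi-process version would use sockets, which is where the discussion goes next:

```python
import os
import queue
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "proxy.db")
work = queue.Queue()
DONE = object()  # sentinel telling the writer to shut down

def writer():
    # The writer owns the only connection, so no second writer can ever
    # collide with it on the database lock.
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE log (src TEXT, body TEXT)")
    conn.commit()
    while True:
        item = work.get()
        if item is DONE:
            break
        conn.execute("INSERT INTO log (src, body) VALUES (?, ?)", item)
        conn.commit()
    conn.close()

def client(name, n=50):
    # Clients never touch the database; they only enqueue requests.
    for i in range(n):
        work.put((name, "msg %d" % i))

w = threading.Thread(target=writer)
w.start()
clients = [threading.Thread(target=client, args=("c%d" % i,)) for i in range(4)]
for c in clients:
    c.start()
for c in clients:
    c.join()
work.put(DONE)
w.join()

total = sqlite3.connect(path).execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(total)  # 200
```

As Robert notes below, this serializes writes rather than scaling them up, but it eliminates lock contention and gives a natural place to add priorities or read-only handling.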
sqlab 21-Apr-2009 [1012x2] | Maybe my test settings are different from your requirements. During testing I let two concurrent processes insert records with three fields: an IP address, a timestamp, and one text field of variable length from 1k to 8k, and this I do at full speed |
I get up to 200 inserts per second with just one process; with two concurrent processes this slows down to 30 to 60 per second | |
Robert 21-Apr-2009 [1014] | I'm pretty sure a proxy process can handle 200 req/s at the frontend and backend. So if 200 req/s is the maximum you can get from one process, adding more processes just divides this down. But it won't scale up. |
sqlab 21-Apr-2009 [1015] | Robert, that could be a solution. Unfortunately I have observed crosstalk of events with many IP connections, meaning one process is awakened without data available if I additionally send data on the same server via IP |
Robert 21-Apr-2009 [1016] | I don't understand the problem. Is it related to my suggestion or to your current try? |
sqlab 21-Apr-2009 [1017x2] | If I have some processes serving incoming events and sending their data to a central storage process, the central process sometimes seems to get an event, but not the data with that event. When the data really arrives, I do not get the event. Maybe it got the first event meant for another connection |
As I know that there should be data available, I read again after a timeout | |
Robert 21-Apr-2009 [1019] | Event = TCP/IP event? |
sqlab 21-Apr-2009 [1020] | yes |
Robert 21-Apr-2009 [1021] | Hm... normally there should be an event with data |
sqlab 21-Apr-2009 [1022] | normally there is, but not always. Most socket connections are static; they stay connected for hours or longer. |
Pekr 30-Apr-2009 [1023] | Reading through the SQLite changelog I found this - http://sqlite.org/rtree.html ... not sure of its practical usability, but some of you might find it interesting :-) |
Maxim 30-Apr-2009 [1024] | I love the sqlite motto :-) Small. Fast. Reliable. Choose any three. |