World: r3wp
[SQLite] C library embeddable DB
Pekr 14-Apr-2009 [964] | OK, index your ID field, then also the user and domain fields |
Janko 14-Apr-2009 [965] | I did before, when trying stuff:
index|dom_user|domains|11|CREATE INDEX dom_user on domains ( domain, user )
index|dom1|domains|16|CREATE INDEX dom1 on domains ( domain )
index|user1|domains|21|CREATE INDEX user1 on domains ( user ) |
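For reference, those three indexes can be reproduced in a quick session; Python's sqlite3 is used here merely as a convenient driver, and the extra `id` and `data` columns are assumptions, not from the chat:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE domains (id INTEGER PRIMARY KEY, domain TEXT, user TEXT, data TEXT)")
# the three indexes Janko listed
con.execute("CREATE INDEX dom_user ON domains (domain, user)")
con.execute("CREATE INDEX dom1 ON domains (domain)")
con.execute("CREATE INDEX user1 ON domains (user)")
# note: the composite (domain, user) index can also satisfy a
# domain-only lookup, which makes dom1 largely redundant
rows = con.execute("SELECT name FROM sqlite_master WHERE type='index'").fetchall()
print(sorted(r[0] for r in rows))
```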
Pekr 14-Apr-2009 [966x3] | I would leave out the first index and add an index for your primary key .... (not sure though :-) |
Simply put: how does the db choose which index to use? You have them separate as well as combined. I would use the combined index (domain, user) only if those two fields were defined together as a primary key ... | |
... well, I am not good at sql internals, so .... try various combinations, and you'll see .. | |
Janko 14-Apr-2009 [969x4] | but I am sure that even if I wrote that data in a file as rebol blocks and loaded and foreach-ed them to find the one, it would take far less than 1s; it's just 180 rows!!!! :) |
(maybe I just need a better VPS) | |
hm.. very interesting results... and basically positive :) with 4000 records the insert time and update time for the same query are just the same as with 183, and select is as fast as before... so it's survivable .. I was afraid that if it's this bad with so few rows, with a couple more it would be unworkable | |
have to go..storm | |
sqlab 14-Apr-2009 [973] | If you update an indexed field, the index has to be updated too. Do you open and close your db, as recommended in the link you posted? Then you have to add the time for that as well. |
Janko 14-Apr-2009 [974] | I had it without indexes at first, and later added indexes while I was trying various things; at 180 records there wasn't any noticeable change. Well, the result doesn't seem so bad to me right now.. if it has the same delay with 4000 records it's okay-ish. On my local computer, which is much better than some small VPS, I noticed no delays. I just realized that the delay in the web-app was 3x bigger than this because I have 3 bots and each has its own "mailbox" ... The solution for this situation will be offloading the inserts from the request process; for the future, when things need to scale up, I will try doing this differently anyway; that's why I was playing with actor-like systems |
Oldes 14-Apr-2009 [975x2] | 1s to 5s on 180 rows is bad, whatever you do. |
I'm not an sqlite user, but I would probably try EXPLAIN to see what's going on when you do the query - http://www.sqlite.org/lang_explain.html | |
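A sketch of Oldes' suggestion, using the friendlier EXPLAIN QUERY PLAN variant (table and column names are assumed from the earlier messages, not confirmed by the chat):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE domains (id INTEGER PRIMARY KEY, domain TEXT, user TEXT)")
con.execute("CREATE INDEX dom_user ON domains (domain, user)")
# EXPLAIN QUERY PLAN reports which index (if any) a query will use
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM domains WHERE domain = ? AND user = ?",
    ("example.com", "u1"),
).fetchall()
for row in plan:
    print(row)  # the detail column should mention index dom_user
```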
Janko 14-Apr-2009 [977x2] | yes, I think so too.. it is interesting that the time looks the same at 4000 rows ... maybe the reason is also that I am on some very cheap VPS (12EUR, 180MB ram, ? CPU) ... |
aha, thanks for the idea.. I will use explain .. although it's such a simple query that it shouldn't do anything strange, I think | |
Oldes 14-Apr-2009 [979] | also... if you want to index domain and user, you should not use varchar without length. |
Janko 14-Apr-2009 [980x2] | aha, I will try that .. and I can use Integer for user ... because now each user gets a folder like u<Int> ... and I could maybe put all 3 mailboxes into the same table so it would be only 1 insert/update instead of 3 on changes ... I didn't think performance would be the problem here (it still seems a little funny), because it's just a simple table without any relations or anything and no big amount of data |
I have used sqlite here and there for more real DB work and I have never seen any critical slowness (except if you do something atypical like inserting 100 rows, each in its own transaction, without begin/commit); in fact it has always seemed very fast to me ... that's why I suspect none of this would show up if I had a better VPS. Also, because it fluctuates so much, I suspect the disk on the computer the VPS is on is busy doing other stuff, so at one moment it is idle and works faster, and at another it waits for >3 seconds | |
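The transaction point is worth underlining: outside an explicit BEGIN/COMMIT, every INSERT is its own transaction and pays a sync to disk. A minimal comparison, assuming Python's sqlite3 as the driver:

```python
import sqlite3, time, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "bench.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE t (x INTEGER)")

t0 = time.time()
for i in range(100):
    con.execute("INSERT INTO t VALUES (?)", (i,))  # 100 separate commits
per_row = time.time() - t0

t0 = time.time()
con.execute("BEGIN")
for i in range(100):
    con.execute("INSERT INTO t VALUES (?)", (i,))
con.execute("COMMIT")  # one commit (one sync) for all 100 rows
batched = time.time() - t0

print(f"per-row: {per_row:.3f}s  batched: {batched:.3f}s")
```

On a spinning disk the per-row variant is typically slower by orders of magnitude, which matches the "100 rows each in its own transaction" caveat above.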
Oldes 14-Apr-2009 [982] | Yes.. it's possible that the VPS does many other disk IO operations, so you have to wait. |
Janko 14-Apr-2009 [983] | but the changes you proposed will help me get it faster anyway and I will try them |
Pekr 14-Apr-2009 [984] | you can use varchar even for indexes; it should speed up things significantly ... |
amacleod 16-Apr-2009 [985] | Is there a way to reorder columns in SQLite? |
Janko 16-Apr-2009 [986] | alter table supports just renaming the table and adding columns; for everything else you need to create a new table -> insert .. select ... the data, then drop the old one and rename the new |
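The recreate-and-rename sequence Janko describes, sketched with Python's sqlite3 against the books table that comes up in this thread (the sample row is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (ref_number INTEGER, bk TEXT, chap TEXT)")
con.execute("INSERT INTO books VALUES (1, 'Genesis', 'ch1')")

# SQLite's ALTER TABLE can only rename the table or add columns,
# so reordering means: new table -> copy the data -> drop -> rename
con.executescript("""
    CREATE TABLE books_new (bk TEXT, chap TEXT, ref_number INTEGER);
    INSERT INTO books_new (bk, chap, ref_number)
        SELECT bk, chap, ref_number FROM books;
    DROP TABLE books;
    ALTER TABLE books_new RENAME TO books;
""")
print(con.execute("SELECT * FROM books").fetchone())
```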
sqlab 16-Apr-2009 [987] | In sql there is no need to reorder the columns, as you can get any order you desire by using the column names in your select statement. |
amacleod 16-Apr-2009 [988x3] | That sounds helpful..thanks sqlab. |
Can you do the same for inserting values? I know I can do it for updating row data, but I cannot figure out the syntax for inserting.. | |
For example: SQL reduce [{UPDATE books SET bk=?, chap=? WHERE ref_number=?} blk/2 blk/3 blk/1] | |
sqlab 16-Apr-2009 [991] | should be like insert into table (col3, col2, col1) values (?, ?, ?) |
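The same idea as a runnable check, with Python's sqlite3 standing in for the REBOL driver; the blk layout mirrors amacleod's UPDATE example, and the sample values are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (ref_number INTEGER, bk TEXT, chap TEXT)")

blk = [42, "Exodus", "ch3"]  # ref_number, bk, chap -- as in the UPDATE example
# naming the columns means their order in the table no longer matters
con.execute(
    "INSERT INTO books (bk, chap, ref_number) VALUES (?, ?, ?)",
    (blk[1], blk[2], blk[0]),
)
print(con.execute("SELECT bk, chap, ref_number FROM books").fetchone())
```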
amacleod 16-Apr-2009 [992] | works...thanks a lot. I had this fear of having to change large parts of my code if I added or removed columns...This solves that problem. |
sqlab 21-Apr-2009 [993] | Janko: I just tested two concurrent processes writing into one table again. This time I used the approach of connecting and closing for each statement, as the link you gave suggested. I still got locking. |
Janko 21-Apr-2009 [994] | I understood the text in the link to mean that if you get a lock, you wait for a while and try again.. and that with this method it even scales to quite a big number of concurrent processes, compared to mysql for example |
sqlab 21-Apr-2009 [995] | trying again is already implemented in sqlite.r in the function do-step. I expanded the wait time with a random duration of up to one second; still not good enough. And now, additionally, I connected and disconnected according to the link. Again I encountered locks |
Janko 21-Apr-2009 [996x2] | aha ... hm.. I haven't tried it myself yet |
how many times do you retry? | |
sqlab 21-Apr-2009 [998] | a few 10k times |
Janko 21-Apr-2009 [999] | what, with disconnecting and connecting? :) |
sqlab 21-Apr-2009 [1000] | sorry, that was the number of statements until a lock occurs |
Janko 21-Apr-2009 [1001] | aha .. does it at least significantly reduce the occurrence of the lock, or is it the same as no retry? |
sqlab 21-Apr-2009 [1002x2] | do-step seems to retry until not busy |
yes, it reduces them, but they still happen | |
Janko 21-Apr-2009 [1004] | He tried only 100 times (judging by the example he gave) .. and I don't know what he means by 500 >>> in line with mysql and others- at 500+ simultaneous users you start to get about a 10% drop because of lock overhead. <<< Most probably not 500 processes writing all the time, without delays |
sqlab 21-Apr-2009 [1005x2] | my mistake, I forgot to set the number of retries during connect. I will test again |
no difference; sooner rather than later I get a lock, even with 100 retries, after less than 1000 statements | |
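For comparison, SQLite also ships a built-in busy handler, and a manual retry loop looks roughly like this. This is a sketch in Python's sqlite3, assuming the retry behaviour described for do-step; the table layout loosely follows sqlab's test, and all names are illustrative:

```python
import sqlite3, time, os, tempfile

db_path = os.path.join(tempfile.mkdtemp(), "shared.db")
# sqlite3 exposes SQLite's built-in busy handler as a timeout (in seconds):
# on SQLITE_BUSY it retries internally until the deadline expires
con = sqlite3.connect(db_path, timeout=5.0)
con.execute("CREATE TABLE IF NOT EXISTS log (ip TEXT, ts REAL, body TEXT)")

def insert_with_retry(con, sql, params, retries=100):
    """Manual fallback: retry on 'database is locked' with a growing
    backoff, roughly what sqlite.r's do-step is described as doing."""
    for attempt in range(retries):
        try:
            with con:  # one transaction per attempt; rolled back on error
                con.execute(sql, params)
            return True
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc):
                raise
            time.sleep(0.01 * (attempt + 1))
    return False

ok = insert_with_retry(con, "INSERT INTO log VALUES (?, ?, ?)",
                       ("10.0.0.1", time.time(), "x" * 1024))
print(ok)
```

Even so, as sqlab observes, retries only reduce the lock failures; with two full-speed writers they still occur.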
Janko 21-Apr-2009 [1007x2] | hm.. too bad |
how are you testing it.. how many writes/sec .. processes? | |
Robert 21-Apr-2009 [1009x3] | sqlab, use a proxy process that does the prioritization. Send requests from the clients to the proxy process, and it handles the one connection to the database. |
IMO you are much more flexible that way anyway. | |
You can add things like priority, read-only support etc. | |
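Robert's proxy idea, sketched with threads and a queue standing in for the separate client processes; the names (proxy, mail, bot1) are illustrative only:

```python
import sqlite3, threading, queue, os, tempfile

db_path = os.path.join(tempfile.mkdtemp(), "mail.db")
writes = queue.Queue()
N = 6  # total writes the proxy will accept before exiting

def proxy():
    # the single connection that ever writes, so writers cannot collide
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE mail (user TEXT, body TEXT)")
    for _ in range(N):
        user, body = writes.get()
        with con:
            con.execute("INSERT INTO mail VALUES (?, ?)", (user, body))
    con.close()

w = threading.Thread(target=proxy)
w.start()

def client(name):
    # clients never open the database; they just enqueue requests
    for i in range(N // 2):
        writes.put((name, f"msg {i}"))

for name in ("bot1", "bot2"):
    threading.Thread(target=client, args=(name,)).start()
w.join()

total = sqlite3.connect(db_path).execute("SELECT COUNT(*) FROM mail").fetchone()[0]
print(total)
```

The queue is the natural place to add the priorities and read-only handling Robert mentions.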
sqlab 21-Apr-2009 [1012x2] | Maybe my test settings are different from your requirements. During testing I let two concurrent processes insert records with three fields: one ip address, a timestamp, and one text field with variable length from 1k to 8k, and this I do at full speed |
I get up to 200 inserts per second with just one process; with two concurrent processes this slows down to 30 to 60 per second | |