How often are you supposed to commit and close connection in SQLite? Is it for every statement you execute, or just occasionally? What's the best practice?


don't know about SQLite, but I believe best practice is to have a constant connection pool and send statements through them

for sqlite it depends on whether you have concurrent readers/writers. You don't want to lock the database for too long, so it's always going to be a tradeoff between batching as many statements as possible into one transaction and keeping locks short. Basically it comes down to what you care about in ACID.

github.com/flaschbier/StillLearningPython/blob/master/transaction/try.py

I was going by this tutorial: pythoncentral.io/introduction-to-sqlite-in-python/

He's always doing like:

db = sqlite3.connect('cocks.db')
cursor = db.cursor()

And then executes the SQL statement with cursor.execute. Turns out you can do db.execute too; not quite sure what the point of using a cursor is unless you're doing a really complicated operation, but w/e. After this he goes:

db.commit()
db.close()

I think commit is the critical step where your data can get corrupted. But I don't really know shit about databases; I just needed some copypasta to get my project going, so I used these two stanzas blindly.

My program executes SQL statements (especially SELECT and UPDATE) a lot. It does some work, gets the result, puts it into the db, then depending on conditionals does some other work and puts that into the db too. That means several execute()'s, all separated by somewhat non-trivial code, so I figure I shouldn't just open the db once at program start and close it just before exiting, since what if it crashes without closing it or something. So I surround each execute() call separately, which adds 4 lines of boilerplate to every single execute() line. Not great for readability.

I could just make a function that adds the boilerplate to be sure, but is this constant opening and closing bad? My scale is tiny so it doesn't really matter, but I'm learning it anyway, might as well learn to do it right.
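One common way to cut that per-execute() boilerplate in Python's sqlite3 is to keep a single connection and use it as a context manager: "with db:" commits if the block succeeds and rolls back if it raises (note it does NOT close the connection, so it can be reused). A minimal sketch; the table name and values are made up:

```python
import sqlite3

# One connection for the program's lifetime (':memory:' here so the
# example is self-contained; a real path like 'mydata.db' works the same).
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE results (step TEXT, value INTEGER)')

def save_result(step, value):
    # "with db:" commits if the block succeeds, rolls back if it raises.
    # It does NOT close the connection, so the function can be called again.
    with db:
        db.execute('INSERT INTO results (step, value) VALUES (?, ?)',
                   (step, value))

save_result('first', 1)
save_result('second', 2)
count = db.execute('SELECT COUNT(*) FROM results').fetchone()[0]
print(count)  # → 2
db.close()
```

This way each logical unit of work is one short "with db:" block instead of four lines of open/commit/close boilerplate, and a crash mid-block rolls back cleanly.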


By locking do you mean the db.connect() and db.close()? What's a good rule of thumb for how long is too long?

Also I keep having to repeat the column names in the db and the python variable names. It seems like 99% of the time the value from the db field "my_col_name" is going to be stored in variable "my_col_name". But going by the tutorial I was using:
cursor.execute('SELECT name, email, phone FROM users')
for row in cursor:
    # row[0] returns the first column in the query (name), row[1] returns email column.
    print('{0} : {1}, {2}'.format(row[0], row[1], row[2]))
I have to write this ugly error-prone code where values are mapped based on their position in the tuple representing a row of values. I feel like there's gotta be a better way of doing these. I wish Python could just see the db schema and automatically have it go into appropriately structured objects.
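For the positional-tuple complaint specifically, sqlite3 has a built-in fix: set the connection's row_factory to sqlite3.Row, and rows become indexable by column name. A small sketch with invented data:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.row_factory = sqlite3.Row  # rows now support row['column_name']
db.execute('CREATE TABLE users (name TEXT, email TEXT, phone TEXT)')
db.execute("INSERT INTO users VALUES ('alice', 'a@example.com', '555-0100')")

rows = db.execute('SELECT name, email, phone FROM users').fetchall()
for row in rows:
    # Access fields by column name instead of tuple position.
    print('{0} : {1}, {2}'.format(row['name'], row['email'], row['phone']))

db.close()
```

sqlite3.Row costs almost nothing over plain tuples and still supports positional indexing, so it's a safe default for any query where you'd otherwise be counting columns.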

I remember C# had some neat DataTable/DataSet stuff with type safety and so on, but their ORM seemed really confusing to set up with a non-MS database like MySQL.

So basically the things I know are caveman db programming and proprietary cancer. How do normal people program db logic?

Fucked up the code, should have been:
cursor.execute('SELECT name, email, phone FROM users')
for row in cursor:
    # row[0] returns the first column in the query (name), row[1] returns email column.
    print('{0} : {1}, {2}'.format(row[0], row[1], row[2]))

you lock when you issue an UPDATE/INSERT with those bindings, iirc. commit ends the transaction.

For sqlite, all bets are off as there is no real concept of who is responsible for maintenance tasks. Don't use it for anything serious. For example, in a WAL database, crash one client during a transaction. Now you have a WAL file that grows with every future transaction forever as nothing will ever clear up the checkpoint.

also, yeah, I was going to get into this last night.
for storing stuff locally for an application, it's alright, but you're better off migrating to postgres or something not shit.
even mysql is better. sqlite is good for testing, I guess.

Isn't the big selling point the lack of running a server? Especially if you don't have admin permissions.

I always thought it was being lightweight for small-scale use.

An SQL server is easy as fuck to run with Docker. Although you probably can't do Docker if admin rights are a problem.

The point of SQLite is to replace basic open() and write() calls to a file: when you use those calls in your program, think about using SQLite instead. If you're going to run into situations involving shared data contention and timing issues, you need to reconcile that yourself; you can't rely on SQLite to do it for you.

It's for either:
a) Lightweight, small-scale use
b) Quick prototyping so that you don't have to run an SQL Server

Trying to use it for anything serious or multi-threaded is a yuge mistake imo.

You commit if you need the data now, close if the program/script has done its thing.

Ah, so it sounds like SQLite is really bad for any situation where you actually need a database. Funny that it's used so much anyway.


So it's commit just before reading, not commit early commit often?

It's not bad, but if you're using it how it's not meant to be used then of course it's going to suck.

No, SQLite is reasonably fast, even when compared to SQL servers, for the intended usage. SQLite can be a reasonable or even superior choice for some applications. If you don't need a lot of data (tens of TB) in one database, high concurrency, multiple servers, or some specific advanced feature of some SQL server, you should consider SQLite.

Use h2
h2database.com/html/main.html

You can use WAL to create concurrency no?
sqlite.org/draft/wal.html
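Per the WAL docs, readers don't block the one active writer and the writer doesn't block readers (there's still only one writer at a time). Enabling it from Python is a one-line pragma; a minimal sketch, using a temp file since in-memory databases don't support WAL:

```python
import os
import sqlite3
import tempfile

# WAL mode is a persistent property of the database file, set via PRAGMA.
# Using a temp file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), 'wal_demo.db')
db = sqlite3.connect(path)

# The pragma returns the journal mode actually in effect.
mode = db.execute('PRAGMA journal_mode=WAL').fetchone()[0]
print(mode)  # 'wal' on filesystems that support it

db.close()
```

Note WAL generally requires the database to be on a local filesystem that supports shared memory; on network filesystems it may silently stay in the default rollback-journal mode.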

I use SQLite on a production server, it's alright.
Do you think it will break some day?

use mongodb it's webscale

no
where are you getting these ideas from?
depending on your bindings, it might defer or be an immediate transaction, and thus might lock the database.
just fucking commit after you've finished with the query when WRITING to the database. you don't have to commit when reading from the database. quit being a retard and read the fucking docs, and the docs of your bindings/ORM
grox.net/doc/sqlite/lang_transaction.html

Not at all. SQLite is production ready. However, you need to understand the proper context in where to use it. It's perfectly reasonable to use both SQLite and a bigger SQL database system (like PostgreSQL or MariaDB) because neither of them conflict in their usage.

Entity framework.

commit is for transactions. It's so you can atomically modify data. For instance, let's say your code needs to update data in 3 tables. You start a transaction, modify the data, and then commit it. If you were to experience an error while updating table number 2, you would roll back the transaction. The changes aren't actually saved to the database until you commit the transaction. It also resolves issues with race conditions when you have multiple threads/processes modifying the same database.
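The multi-table scenario above can be sketched with Python's sqlite3 module; the table names and the simulated failure are invented for illustration. If any statement raises before commit, rollback() discards everything since the last commit:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)')
db.execute('CREATE TABLE ledger (account_id INTEGER, delta INTEGER)')
db.execute('INSERT INTO accounts VALUES (1, 100), (2, 0)')
db.commit()

try:
    # All of these statements should succeed together or not at all.
    db.execute('UPDATE accounts SET balance = balance - 40 WHERE id = 1')
    db.execute('INSERT INTO ledger VALUES (1, -40)')
    # Simulate a crash partway through the transaction:
    raise RuntimeError('simulated failure before the transfer completes')
    db.execute('UPDATE accounts SET balance = balance + 40 WHERE id = 2')
except RuntimeError:
    db.rollback()  # undoes the debit and the ledger row above

balance = db.execute('SELECT balance FROM accounts WHERE id = 1').fetchone()[0]
print(balance)  # → 100, the partial update was rolled back
db.close()
```

Without the rollback, account 1 would have been debited with no matching credit, which is exactly the half-finished state transactions exist to prevent.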

pretty much this. The following query uses a cursor to enumerate all the table/column pairs in MSSQL and then runs some dynamic SQL to search for values in them. I use this to search for data in databases I'm unfamiliar with.

declare @SearchValue varchar(max)
declare @SearchGUIDs bit
use OperationsManager
set @SearchValue = '67c0d5b3-9a9b-c81e-55e5-c9756959dea9'
set @SearchGUIDs = 1

if object_id('tempdb..#Results') is not null drop table #Results
create table #Results (
    [Schema] varchar(max),
    [Table] varchar(max),
    [Column] varchar(max),
    [Data Type] varchar(max)
)

declare @Message varchar(max)
declare @SchemaName sysname
declare @TableName sysname
declare @ColumnName sysname
declare @ColumnDataType sysname

declare TableCursor cursor for
select sys.schemas.name, sysobjects.name, syscolumns.Name, systypes.name
from sysobjects
join syscolumns on sysobjects.id = syscolumns.id
join systypes on systypes.xtype = syscolumns.xtype and systypes.name != 'sysname'
join sys.schemas on sysobjects.uid = sys.schemas.schema_id
where sysobjects.xtype = 'U'
order by sys.schemas.name, sysobjects.name, syscolumns.name

open TableCursor
fetch next from TableCursor into @SchemaName, @TableName, @ColumnName, @ColumnDataType
while @@fetch_status = 0
begin
    set @Message = cast(getdate() as varchar(max)) + ' - ' + @SchemaName + '.'
        + @TableName + ' - ' + @ColumnName + ' (' + @ColumnDataType + ') '

    if @ColumnDataType in ('char', 'varchar', 'text', 'nchar', 'nvarchar', 'ntext', 'binary', 'varbinary', 'image')
    begin
        raiserror(@Message, 0, 0) with nowait
        exec ('
            if exists (
                select top 1 [' + @ColumnName + ']
                from [' + @SchemaName + '].[' + @TableName + ']
                where [' + @ColumnName + '] like ''%'' + ''' + @SearchValue + ''' + ''%'')
            insert into #Results values (''' + @SchemaName + ''', ''' + @TableName + ''', ''' + @ColumnName + ''', ''' + @ColumnDataType + ''')')
    end

    if @ColumnDataType in ('xml')
    begin
        raiserror(@Message, 0, 0) with nowait
        exec ('
            if exists (
                select top 1 [' + @ColumnName + ']
                from [' + @SchemaName + '].[' + @TableName + ']
                where cast([' + @ColumnName + '] as varchar(max)) like ''%'' + ''' + @SearchValue + ''' + ''%'')
            insert into #Results values (''' + @SchemaName + ''', ''' + @TableName + ''', ''' + @ColumnName + ''', ''' + @ColumnDataType + ''')')
    end

    if @ColumnDataType = 'uniqueidentifier' and @SearchGUIDs = 1
    begin
        raiserror(@Message, 0, 0) with nowait
        exec ('
            if exists (
                select top 1 [' + @ColumnName + ']
                from [' + @SchemaName + '].[' + @TableName + ']
                where [' + @ColumnName + '] = ''' + @SearchValue + ''')
            insert into #Results values (''' + @SchemaName + ''', ''' + @TableName + ''', ''' + @ColumnName + ''', ''' + @ColumnDataType + ''')')
    end

    fetch next from TableCursor into @SchemaName, @TableName, @ColumnName, @ColumnDataType
end
close TableCursor
deallocate TableCursor

select * from #Results
drop table #Results