From the Django docs:

SQLite is meant to be a lightweight database, and thus can't support a high level of concurrency. OperationalError: database is locked errors indicate that your application is experiencing more concurrency than SQLite can handle in the default configuration. This error means that one thread or process has an exclusive lock on the database connection and another thread timed out waiting for the lock to be released.

Python's SQLite wrapper has a default timeout value that determines how long the second thread is allowed to wait on the lock before it times out and raises the OperationalError: database is locked error.

If you're getting this error, you can solve it by:
- Switching to another database backend. At a certain point SQLite becomes too "lite" for real-world applications, and these sorts of concurrency errors indicate you've reached that point.
- Rewriting your code to reduce concurrency and ensure that database transactions are short-lived.
- Increasing the default timeout value by setting the timeout database option.

https://docs.djangoproject.com/en/dev/ref/databases/#database-is-locked-errors
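For the third option, the corresponding setting in settings.py looks like this (a sketch; the 20-second value and the database name are just examples):

```python
# settings.py -- raise SQLite's busy timeout; Django passes OPTIONS
# through to sqlite3.connect(), so 'timeout' is in seconds.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'db.sqlite3',
        'OPTIONS': {
            'timeout': 20,  # example value; the sqlite3 default is 5 seconds
        },
    }
}
```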
answered Jul 3, 2010 at 21:33 by patrick
In my case, it was because I had the database open in SQLite Browser. When I closed it in the browser, the problem was gone.
answered Sep 12, 2016 at 11:21 by Aminah Nuraini
I slightly disagree with the accepted answer which, by quoting this doc, implicitly links OP's problem (database is locked) to this:

Switching to another database backend. At a certain point SQLite becomes too "lite" for real-world applications, and these sorts of concurrency errors indicate you've reached that point.

This is a bit "too easy" a way to incriminate SQLite for this problem. SQLite is very powerful when correctly used; it is not only a toy for small databases (fun fact: an SQLite database is limited in size to 140 terabytes).

Unless you have a very busy server with thousands of connections in the same second, the reason for this database is locked error is probably bad use of the API rather than a problem inherent to SQLite that would make it "too light". Here is more information about Implementation Limits for SQLite.
Now the solution:

I had the same problem when I was using two scripts on the same database at the same time:

- one was accessing the DB with write operations
- the other was accessing the DB read-only

Solution: always call cursor.close() as soon as possible after having done a (even read-only) query.

Here are more details.
answered Nov 25, 2018 at 17:43 by Basj
The practical reason for this is often that a Python or Django shell has opened a connection to the DB and it wasn't closed properly; killing your terminal session often frees it up. I had this error when running command-line tests today.

Edit: I get periodic upvotes on this. If you'd like to kill access without rebooting the terminal, then from the command line you can do:
from django import db
db.connections.close_all()
answered Oct 22, 2013 at 11:52 by Withnail
As others have said, there is another process that is using the SQLite file and has not closed the connection. If you are using Linux, you can see which processes are using the file (for example db.sqlite3) with the fuser command as follows:
$ sudo fuser -v db.sqlite3
                     USER   PID  ACCESS  COMMAND
/path/to/db.sqlite3: user   955  F....   apache2
If you want to stop the processes and release the lock, use fuser -k, which sends the KILL signal to all processes accessing the file:
sudo fuser -k db.sqlite3
Note that this is dangerous as it might stop the web server process in a production server.
Thanks to @cz-game for pointing out fuser!
answered May 6, 2019 at 7:34 by mrts
I encountered this error message in a situation that is not (clearly) addressed by the help info linked in patrick’s answer.
When I used transaction.atomic() to wrap a call to FooModel.objects.get_or_create() and called that code simultaneously from two different threads, only one thread would succeed, while the other would get the "database is locked" error. Changing the timeout database option had no effect on the behavior.

I think this is because SQLite cannot handle multiple simultaneous writers, so the application must serialize writes itself.
I solved the problem by using a threading.RLock object instead of transaction.atomic() when my Django app is running with a SQLite backend. That's not entirely equivalent, so you may need to do something else in your application.
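A minimal sketch of the idea (illustrative names; plain sqlite3 instead of Django models): every writer takes a process-wide lock before touching the database, so writes are serialized and never collide.

```python
import os
import sqlite3
import tempfile
import threading

write_lock = threading.RLock()  # one lock shared by all writers in this process
db_path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")

init = sqlite3.connect(db_path)
init.execute("CREATE TABLE submission (user_id INTEGER)")
init.commit()
init.close()

def make_submission(user_id):
    with write_lock:  # serialize writes instead of relying on SQLite's locking
        conn = sqlite3.connect(db_path, timeout=0)
        try:
            conn.execute("INSERT INTO submission VALUES (?)", (user_id,))
            conn.commit()
        finally:
            conn.close()

threads = [threading.Thread(target=make_submission, args=(uid,)) for uid in (296, 297)]
for t in threads:
    t.start()
for t in threads:
    t.join()

check = sqlite3.connect(db_path)
total = check.execute("SELECT COUNT(*) FROM submission").fetchone()[0]
print(total)  # 2
```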
Here's my code that runs FooModel.objects.get_or_create simultaneously from two different threads, in case it is helpful:
from concurrent.futures import ThreadPoolExecutor
import configurations
configurations.setup()
from django.db import transaction
from submissions.models import ExerciseCollectionSubmission
def makeSubmission(user_id):
    try:
        with transaction.atomic():
            e, _ = ExerciseCollectionSubmission.objects.get_or_create(
                student_id=user_id, exercise_collection_id=172)
    except Exception as e:
        return f'failed: {e}'
    e.delete()
    return 'success'

futures = []
with ThreadPoolExecutor(max_workers=2) as executor:
    futures.append(executor.submit(makeSubmission, 296))
    futures.append(executor.submit(makeSubmission, 297))

for future in futures:
    print(future.result())
answered Sep 6, 2019 at 18:14 by Evan
I got this error when using a database file saved under WSL (\wsl$ …) while running a Windows Python interpreter.

You can either not save the database in your WSL tree, or use a Linux-based interpreter in your distro.
answered May 15, 2021 at 14:44 by Joe
I was facing this issue in my Flask app because I had opened the database in SQLite Browser and forgot to write the changes.

If you have also made changes in SQLite Browser, click on Write Changes and everything will be fine.
answered Oct 19, 2021 at 17:19
This can also happen if you are connected to your SQLite DB via the DB Browser plugin in PyCharm. Disconnecting will solve the problem.
answered Jan 7, 2019 at 15:21
For me it was resolved once I closed the Django shell, which had been opened using python manage.py shell.
answered Feb 6, 2019 at 2:59 by Avinash Raj
I've got the same error! One of the reasons was that a DB connection was left unclosed. Therefore, check for unclosed DB connections, and also check that you committed the DB before closing the connection.
answered Sep 2, 2019 at 13:11
I had a similar error, right after the first instantiation of Django (v3.0.3). None of the recommendations here worked, apart from:

- delete the db.sqlite3 file (losing the data there, if any)
- run python manage.py makemigrations
- run python manage.py migrate
Btw, if you want to just test PostgreSQL:

docker run --rm --name django-postgres \
  -e POSTGRES_PASSWORD=mypassword \
  -e PGPORT=5432 \
  -e POSTGRES_DB=myproject \
  -p 5432:5432 \
  postgres:9.6.17-alpine
Change settings.py to add this DATABASES block:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproject',
        'USER': 'postgres',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
…and add database adapter:
pip install psycopg2-binary
Then the usual:
python manage.py makemigrations
python manage.py migrate
answered Feb 14, 2020 at 22:50 by Kyr
Check whether your database is open in another DB browser.

If it is open in another application, close that application and run the program again.
answered Jun 5, 2021 at 9:03
Just close (stop) and open (start) the database. This solved my problem.
answered May 6, 2020 at 21:35 by doğukan
I found this worked for my needs (thread locking). YMMV:
conn = sqlite3.connect(database, timeout=10)
https://docs.python.org/3/library/sqlite3.html
sqlite3.connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri])
When a database is accessed by multiple connections, and one of the processes modifies the database, the SQLite database is locked until that transaction is committed. The timeout parameter specifies how long the connection should wait for the lock to go away until raising an exception. The default for the timeout parameter is 5.0 (five seconds).
answered Aug 18, 2020 at 4:04
In my case, I had added a new record manually and saved it; then, through the shell, I tried to add a new record and this time it worked perfectly:
In [7]: from main.models import Flight
In [8]: f = Flight(origin="Florida", destination="Alaska", duration=10)
In [9]: f.save()
In [10]: Flight.objects.all()
Out[10]: <QuerySet [<Flight: Flight object (1)>, <Flight: Flight object (2)>, <Flight: Flight object (3)>, <Flight: Flight object (4)>]>
answered Nov 21, 2020 at 8:54
In my case, I had not saved a database operation I performed within the SQLite Browser. Saving it solved the issue.
answered Sep 13, 2018 at 21:41
A very unusual scenario, which happened to me.

There was infinite recursion, which kept creating objects. More specifically, using DRF, I was overriding the create method in a view, and I did:

def create(self, request, *args, **kwargs):
    ....
    ....
    return self.create(request, *args, **kwargs)
answered Apr 19, 2020 at 20:33 by Loner
A lot of answers are already available here, but I want to share my case; it may help someone.

I had opened a connection in a Python API to update values, and I would close the connection only after receiving the server response. The mistake was that I opened the connection to do some other operation on the server as well, before closing the connection in the Python API.
answered Jun 27, 2020 at 9:02 by indev
If you get this error while using manage.py shell, one possible reason is that you have a development server running (manage.py runserver) which is locking the database. Stopping the server while using the shell has always fixed the problem for me.
answered Aug 4, 2020 at 13:09
Actually I faced the same problem: when I used transaction.atomic() with select_for_update(), I got the error OperationalError: database is locked.

After many tries / much searching / reading the Django docs, I found that the problem comes from SQLite itself: it does not support the select_for_update method, as the Django docs say. Kindly have a look at the following URL and read it in depth:

https://docs.djangoproject.com/en/dev/ref/databases/#database-is-locked-errors

When I moved to MySQL, everything went fine.

As the Django docs also say, "database is locked" may happen when a database timeout occurs. They recommend changing the database timeout by setting the following option:

'OPTIONS': {
    # ...
    'timeout': 20,
    # ...
}

Finally, I recommend using MySQL/PostgreSQL even if you are working in a development environment.

I hope this is helpful for you.
answered Mar 23, 2021 at 4:27 by K.A
I got this error when attempting to create a new table in SQLite while the session object still contained uncommitted (though flushed) changes.
Make sure to either:
- Commit the session(s) before creating a new table
- Close all sessions and perform the table creation in a new connection
- …
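The same situation can be reproduced with the plain sqlite3 module (a sketch; file and table names are made up): a connection with uncommitted changes holds the write lock, so creating a table from a second connection fails until the first connection commits.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")
a = sqlite3.connect(path, timeout=0)   # timeout=0: fail fast instead of waiting
b = sqlite3.connect(path, timeout=0)

a.execute("CREATE TABLE t (x INTEGER)")
a.commit()
a.execute("INSERT INTO t VALUES (1)")  # uncommitted: a now holds the write lock

try:
    b.execute("CREATE TABLE u (y INTEGER)")
    blocked = False
except sqlite3.OperationalError:       # "database is locked"
    blocked = True

a.commit()                             # commit the pending changes first
b.execute("CREATE TABLE u (y INTEGER)")  # now the table creation goes through
b.commit()
print(blocked)  # True
```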
answered May 11, 2021 at 8:39 by mibm
@Shilp Thapak's answer is correct: the reason for the error is that you did not write your manual changes to the data in DB Browser for SQLite before running your application.

If you didn't write the changes in whatever SQL client you are using, you can still create the engine, but it will throw the operational error about the database being locked.

You can check whether your engine can connect by checking for the existence of a rollback journal. The default mode of a rollback journal is to be created at the start of a transaction and deleted at the end of it. It exists in the same directory as your database, has the same name as the database file, and the suffix "-journal" appended (assuming the journal mode has not been changed at Journal mode in the Edit pragmas panel in DB Browser for SQLite).

You can check the existence of the temp file like so:

if os.path.isfile('your-database.sqlite-journal'):
    print("The database is locked. Please write your changes in your SQL client before proceeding.\n")

Read more about temporary files here.

So there is no need to close the server or DB Browser for SQLite. In fact, as long as all the changes are written, you can have several clients connected to the database simultaneously and still run your application at the same time.
answered Jan 21, 2022 at 12:52 by Rubén
For me it was simply that I was accessing the database in the SQLite app at the same time as running my Python code to create a new table.
Closing SQLite until the code is done solved my issue.
answered Mar 8, 2022 at 8:56
If you're using more than one development editor on the same project, that may cause the same error. For example, Spyder and VS Code running the same prj.py file create a conflict in some processes, like deleting records.
answered Apr 12 at 15:33
UPDATE: Django version 2.1.7

I got the error sqlite3.OperationalError: database is locked when using pytest with django.

Solution:

If we are using the @pytest.mark.django_db decorator, what it does is create an in-memory DB for testing, named: file:memorydb_default?mode=memory&cache=shared
We can get this name with:
from django.db import connection
db_path = connection.settings_dict['NAME']
To access this database and also edit it, connect to the database:

with sqlite3.connect(db_path, uri=True) as conn:
    c = conn.cursor()

Use uri=True to specify that the given disk file is the SQLite database to be opened.
To avoid the error activate transactions in the decorator:
@pytest.mark.django_db(transaction=True)
Final function:

from django.db import connection

@pytest.mark.django_db(transaction=True)
def test_mytest():
    db_path = connection.settings_dict['NAME']
    with sqlite3.connect(db_path, uri=True) as conn:
        c = conn.cursor()
        c.execute('my amazing query')
        conn.commit()
    assert ... == ....
answered Apr 17, 2019 at 21:38 by virtualdvid
Just reboot your server, it will clear all current processes that have your database locked.
answered Aug 4, 2020 at 22:29
I just needed to add alias sqlite='sqlite3' to my ~/.zshrc.

I then deleted the partially-failed creation of the virtualenv in ~/.pyenv/versions/new-virtualenv, reran pyenv virtualenv <name>, and it worked swimmingly.
answered Oct 9, 2022 at 0:03
try this command:
sudo fuser -k 8000/tcp
answered Jan 15, 2018 at 11:56 by cz game
Hello, for a month now I have been hitting this error every day:

sqlite3.OperationalError: database is locked

The database goes into a locked state and the project stops functioning, even though I commit transactions immediately after updating the data. Still, the error remains and pops up spontaneously after a day of the bot running; so far the only solution is to restart the project.

I tried to track the error down with prints, but transactions are committed normally and I do not leave the database open. As I understand it, the error is caused by heavy load on the database. Switching to another database is not an option for me.
As I understand it, the error is caused by heavy load on the database. Switching to another database is not an option for me.

Then put up with it and catch sqlite3.OperationalError with appropriate handling.

For a bot it is of course better to use a full-fledged database. But if SQLite is a hard requirement, read up on the nuances of its locking in the official wiki.

As a temporary solution you can play with the timeout:

conn = sqlite3.connect(database, timeout=7)  # the default timeout is 5 seconds
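The two suggestions above, catching sqlite3.OperationalError and raising the timeout, can be combined in a small retry helper (a sketch; the function name and the backoff schedule are made up):

```python
import sqlite3
import time

def execute_with_retry(conn, sql, params=(), retries=5, backoff=0.1):
    """Run a statement, retrying while SQLite reports 'database is locked'."""
    for attempt in range(retries):
        try:
            cur = conn.execute(sql, params)
            conn.commit()
            return cur
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) or attempt == retries - 1:
                raise                           # другой сбой or out of retries
            time.sleep(backoff * 2 ** attempt)  # exponential backoff

conn = sqlite3.connect(":memory:", timeout=7)
conn.execute("CREATE TABLE msgs (txt TEXT)")
execute_with_retry(conn, "INSERT INTO msgs VALUES (?)", ("hello",))
total = conn.execute("SELECT COUNT(*) FROM msgs").fetchone()[0]
print(total)  # 1
```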
How is the database accessed? Are there threads? How do they work with the DB?

SQLite is a single-user DB with a single access point. It has no notion of sessions or of access from different threads, let alone under different user accounts. Think of it as pouring thousands of liters of water into a swimming pool through the eye of a needle, using a funnel.

And where is the full stack trace? At least the internals of the sqlite package, so we can see what you are calling and what it fails on.
This error occurs when more than one process is using the same session file, that is, when you run two or more clients
at the same time using the same session name or in case another program has accessed the file.
For example, it could occur when a background script is still running and you forgot about it. In this case, you either
restart your system or find and kill the process that is locking the database. On Unix based systems, you can try the
following:
- cd into your session file directory.
- Run fuser my_account.session to find the process id.
- Run kill 1234 to gracefully stop the process.
- If the last command doesn't help, use kill -9 1234 instead.
If you want to run multiple clients on the same account, you must authorize your account (either user or bot)
from the beginning every time, and use different session names for each parallel client you are going to use.
I run FlexGet on a seedbox server to do some automatic downloads. This error constantly occurred on the server.
Below is one of the crash reports with SQLAlchemy debug logging enabled:
1 2018-09-05 03:00 INFO sqlalchemy.engine.base.Engine task1 COMMIT
2 2018-09-05 03:00 DEBUG sqlalchemy.pool.NullPool task1 Connection <sqlite3.Connection object at 0x7fe4548293d0> being returned to pool
3 2018-09-05 03:00 DEBUG sqlalchemy.pool.NullPool task1 Connection <sqlite3.Connection object at 0x7fe4548293d0> rollback-on-return
4 2018-09-05 03:00 DEBUG sqlalchemy.pool.NullPool task1 Closing connection <sqlite3.Connection object at 0x7fe4548293d0>
5 2018-09-05 03:00 DEBUG util.simple_persistence task1 Flushing simple persistence for task None to db.
6 2018-09-05 03:00 DEBUG sqlalchemy.pool.NullPool task1 Created new connection <sqlite3.Connection object at 0x7fe454829ce0>
7 2018-09-05 03:00 DEBUG sqlalchemy.pool.NullPool task1 Connection <sqlite3.Connection object at 0x7fe454829ce0> checked out from pool
8 2018-09-05 03:00 INFO sqlalchemy.engine.base.Engine task1 BEGIN (implicit)
9 2018-09-05 03:00 INFO sqlalchemy.engine.base.Engine task1 UPDATE simple_persistence SET json=? WHERE simple_persistence.feed IS NULL AND simple_persistence.plugin = ? AND simple_persistence."key" = ?
10 2018-09-05 03:00 INFO sqlalchemy.engine.base.Engine task1 ('"2018-09-04T00:03:22Z"', u'manager', u'last_cleanup')
11 2018-09-05 03:00 INFO sqlalchemy.engine.base.Engine task1 UPDATE simple_persistence SET json=? WHERE simple_persistence.feed IS NULL AND simple_persistence.plugin = ? AND simple_persistence."key" = ?
12 2018-09-05 03:00 INFO sqlalchemy.engine.base.Engine task1 ('"2018-09-04T00:03:22Z"', u'db_vacuum', u'last_vacuum')
13 2018-09-05 03:00 INFO sqlalchemy.engine.base.Engine task1 COMMIT
14 2018-09-05 03:00 DEBUG sqlalchemy.pool.NullPool task1 Connection <sqlite3.Connection object at 0x7fe454829ce0> being returned to pool
15 2018-09-05 03:00 DEBUG sqlalchemy.pool.NullPool task1 Connection <sqlite3.Connection object at 0x7fe454829ce0> rollback-on-return
16 2018-09-05 03:00 DEBUG sqlalchemy.pool.NullPool task1 Closing connection <sqlite3.Connection object at 0x7fe454829ce0>
17 2018-09-05 03:00 DEBUG scheduler all tasks in schedule finished executing
18 2018-09-05 03:00 INFO apscheduler.executors.default Job "pt-* (trigger: interval[0:02:00], next run at: 2018-09-05 03:01:54 PDT)" executed successfully
19 2018-09-05 03:01 DEBUG apscheduler.scheduler Looking for jobs to run
20 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Created new connection <sqlite3.Connection object at 0x7fe4548293d0>
21 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Connection <sqlite3.Connection object at 0x7fe4548293d0> checked out from pool
22 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine SELECT apscheduler_jobs.id, apscheduler_jobs.job_state nFROM apscheduler_jobs nWHERE apscheduler_jobs.next_run_time <= ? ORDER BY apscheduler_jobs.next_run_time
23 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine (1536141715.712565,)
24 2018-09-05 03:01 DEBUG sqlalchemy.engine.base.Engine Col ('id', 'job_state')
25 2018-09-05 03:01 DEBUG sqlalchemy.engine.base.Engine Row (u'60924f166baf5ed4812629eefdf5cbe52196b4cb', <read-write buffer ptr 0x7fe4500044e0, size 571 at 0x7fe4500044a0>)
26 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Connection <sqlite3.Connection object at 0x7fe4548293d0> being returned to pool
27 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Connection <sqlite3.Connection object at 0x7fe4548293d0> rollback-on-return
28 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Closing connection <sqlite3.Connection object at 0x7fe4548293d0>
29 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Created new connection <sqlite3.Connection object at 0x7fe4548293d0>
30 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Connection <sqlite3.Connection object at 0x7fe4548293d0> checked out from pool
31 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine UPDATE apscheduler_jobs SET next_run_time=?, job_state=? WHERE apscheduler_jobs.id = ?
32 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine (1536141834.067449, <read-only buffer for 0x7fe4500077f0, size -1, offset 0 at 0x7fe42c5a81f0>, '60924f166baf5ed4812629eefdf5cbe52196b4cb')
33 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine COMMIT
34 2018-09-05 03:01 INFO apscheduler.executors.default Running job "pt-* (trigger: interval[0:02:00], next run at: 2018-09-05 03:03:54 PDT)" (scheduled at 2018-09-05 03:01:54.067449-07:00)
35 2018-09-05 03:01 DEBUG scheduler executing tasks: [u'pt-*']
36 2018-09-05 03:01 DEBUG scheduler task finished executing: task2
37 2018-09-05 03:01 DEBUG manager task2 Not running db cleanup, last run 2018-09-04 00:03:22
38 2018-09-05 03:01 DEBUG sqlalchemy.orm.path_registry task2 set 'memoized_setups' on path 'EntityRegistry((<Mapper at 0x7fe4586b1590; SimpleKeyValue>,))' to '{}'
39 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool task2 Created new connection <sqlite3.Connection object at 0x7fe44fffd118>
40 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool task2 Connection <sqlite3.Connection object at 0x7fe44fffd118> checked out from pool
41 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine task2 BEGIN (implicit)
42 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine task2 SELECT simple_persistence.feed AS simple_persistence_feed, simple_persistence.json AS simple_persistence_json, simple_persistence.id AS simple_persistence_id, simple_persistence.plugin AS simple_persistence_plugin, simple_persistence."key" AS simple_persistence_key, simple_persistence.added AS simple_persistence_added nFROM simple_persistence nWHERE simple_persistence.feed = ?
43 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine task2 ('task2',)
44 2018-09-05 03:02 INFO sqlalchemy.engine.base.Engine task2 ROLLBACK
45 2018-09-05 03:02 DEBUG sqlalchemy.pool.NullPool task2 Connection <sqlite3.Connection object at 0x7fe44fffd118> being returned to pool
46 2018-09-05 03:02 DEBUG sqlalchemy.pool.NullPool task2 Connection <sqlite3.Connection object at 0x7fe44fffd118> rollback-on-return
47 2018-09-05 03:02 DEBUG sqlalchemy.pool.NullPool task2 Closing connection <sqlite3.Connection object at 0x7fe44fffd118>
48 2018-09-05 03:02 CRITICAL task_queue Database error while running a task. Attempting to recover.
49 2018-09-05 03:02 DEBUG scheduler task finished executing: task1
50 Traceback (most recent call last):
51 File "/root/base/flexget/libs/Flexget/flexget/task_queue.py", line 48, in run
52 self.current_task.execute()
53 File "/root/base/flexget/libs/Flexget/flexget/task.py", line 72, in wrapper
54 return func(self, *args, **kw)
55 File "/root/base/flexget/libs/Flexget/flexget/task.py", line 643, in execute
56 fire_event('task.execute.started', self)
57 File "/root/base/flexget/libs/Flexget/flexget/event.py", line 106, in fire_event
58 result = event(*args, **kwargs)
59 File "/root/base/flexget/libs/Flexget/flexget/event.py", line 23, in __call__
60 return self.func(*args, **kwargs)
61 File "/root/base/flexget/libs/Flexget/flexget/utils/simple_persistence.py", line 195, in load_task
62 SimplePersistence.load(task.name)
63 File "/root/base/flexget/libs/Flexget/flexget/utils/simple_persistence.py", line 144, in load
64 for skv in session.query(SimpleKeyValue).filter(SimpleKeyValue.task == task).all():
65 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2737, in all
66 return list(self)
67 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2889, in __iter__
68 return self._execute_and_instances(context)
69 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2912, in _execute_and_instances
70 result = conn.execute(querycontext.statement, self._params)
71 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 948, in execute
72 return meth(self, multiparams, params)
73 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
74 return connection._execute_clauseelement(self, multiparams, params)
75 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
76 compiled_sql, distilled_params
77 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
78 context)
79 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
80 exc_info
81 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
82 reraise(type(exception), exception, tb=exc_tb, cause=cause)
83 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
84 context)
85 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 507, in do_execute
86 cursor.execute(statement, parameters)
87 OperationalError: (sqlite3.OperationalError) database is locked [SQL: u'SELECT simple_persistence.feed AS simple_persistence_feed, simple_persistence.json AS simple_persistence_json, simple_persistence.id AS simple_persistence_id, simple_persistence.plugin AS simple_persistence_plugin, simple_persistence."key" AS simple_persistence_key, simple_persistence.added AS simple_persistence_added nFROM simple_persistence nWHERE simple_persistence.feed = ?'] [parameters: ('task2',)] (Background on this error at: http://sqlalche.me/e/e3q8)
crash_report.2018.09.05.030209839767.log
Possible cause:

This issue is likely caused by FlexGet and APScheduler using the same sqlite3 database file db-config.sqlite and accessing it at the same time.
In the above crash report, from line 29 to 33, APScheduler created a connection <sqlite3.Connection object at 0x7fe4548293d0> to the sqlite3 database file to update next_run_time. That is a database write operation, so the sqlite3 database file was locked during the operation.
29 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Created new connection <sqlite3.Connection object at 0x7fe4548293d0>
30 2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool Connection <sqlite3.Connection object at 0x7fe4548293d0> checked out from pool
31 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine UPDATE apscheduler_jobs SET next_run_time=?, job_state=? WHERE apscheduler_jobs.id = ?
32 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine (1536141834.067449, <read-only buffer for 0x7fe4500077f0, size -1, offset 0 at 0x7fe42c5a81f0>, '60924f166baf5ed4812629eefdf5cbe52196b4cb')
33 2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine COMMIT
From line 39 to 87, it looks like, before this write operation was fully completed and the connection <sqlite3.Connection object at 0x7fe4548293d0> was closed properly, FlexGet in another thread created a new database connection <sqlite3.Connection object at 0x7fe44fffd118> to query simple_persistence. Since the database file was still locked by APScheduler's operation, the "database is locked" error occurred.
2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool task2 Created new connection <sqlite3.Connection object at 0x7fe44fffd118>
2018-09-05 03:01 DEBUG sqlalchemy.pool.NullPool task2 Connection <sqlite3.Connection object at 0x7fe44fffd118> checked out from pool
2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine task2 BEGIN (implicit)
2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine task2 SELECT simple_persistence.feed AS simple_persistence_feed, simple_persistence.json AS simple_persistence_json, simple_persistence.id AS simple_persistence_id, simple_persistence.plugin AS simple_persistence_plugin, simple_persistence."key" AS simple_persistence_key, simple_persistence.added AS simple_persistence_added \nFROM simple_persistence \nWHERE simple_persistence.feed = ?
2018-09-05 03:01 INFO sqlalchemy.engine.base.Engine task2 ('task2',)
2018-09-05 03:02 INFO sqlalchemy.engine.base.Engine task2 ROLLBACK
2018-09-05 03:02 DEBUG sqlalchemy.pool.NullPool task2 Connection <sqlite3.Connection object at 0x7fe44fffd118> being returned to pool
2018-09-05 03:02 DEBUG sqlalchemy.pool.NullPool task2 Connection <sqlite3.Connection object at 0x7fe44fffd118> rollback-on-return
2018-09-05 03:02 DEBUG sqlalchemy.pool.NullPool task2 Closing connection <sqlite3.Connection object at 0x7fe44fffd118>
2018-09-05 03:02 CRITICAL task_queue Database error while running a task. Attempting to recover.
2018-09-05 03:02 DEBUG scheduler task finished executing: task1
Traceback (most recent call last):
  File "/root/base/flexget/libs/Flexget/flexget/task_queue.py", line 48, in run
    self.current_task.execute()
  File "/root/base/flexget/libs/Flexget/flexget/task.py", line 72, in wrapper
    return func(self, *args, **kw)
  File "/root/base/flexget/libs/Flexget/flexget/task.py", line 643, in execute
    fire_event('task.execute.started', self)
  File "/root/base/flexget/libs/Flexget/flexget/event.py", line 106, in fire_event
    result = event(*args, **kwargs)
  File "/root/base/flexget/libs/Flexget/flexget/event.py", line 23, in __call__
    return self.func(*args, **kwargs)
  File "/root/base/flexget/libs/Flexget/flexget/utils/simple_persistence.py", line 195, in load_task
    SimplePersistence.load(task.name)
  File "/root/base/flexget/libs/Flexget/flexget/utils/simple_persistence.py", line 144, in load
    for skv in session.query(SimpleKeyValue).filter(SimpleKeyValue.task == task).all():
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2737, in all
    return list(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2889, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2912, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 948, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
    exc_info
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 507, in do_execute
    cursor.execute(statement, parameters)
OperationalError: (sqlite3.OperationalError) database is locked [SQL: u'SELECT simple_persistence.feed AS simple_persistence_feed, simple_persistence.json AS simple_persistence_json, simple_persistence.id AS simple_persistence_id, simple_persistence.plugin AS simple_persistence_plugin, simple_persistence."key" AS simple_persistence_key, simple_persistence.added AS simple_persistence_added \nFROM simple_persistence \nWHERE simple_persistence.feed = ?'] [parameters: ('task2',)] (Background on this error at: http://sqlalche.me/e/e3q8)
I have briefly looked into the related code of FlexGet and APScheduler. FlexGet shares the same sqlalchemy.engine.Engine
instance with APScheduler to access the database across threads, but this sharing appears not to be thread-safe: the FlexGet thread and the APScheduler thread can create separate connections and access the database concurrently.
Normally such concurrent accesses should be quite rare, which is why the issue appears randomly. On my server, however, it has been happening quite consistently. This is probably because I run FlexGet at a 2-minute interval on a slow seedbox with heavy I/O load. Database operations can take a while to finish on that server, so this kind of collision is much more likely to happen.
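The collision itself is easy to reproduce outside FlexGet with plain sqlite3. A minimal sketch (the file path and table are hypothetical, not FlexGet code): one connection holds an exclusive lock while a second connection, given a deliberately short busy timeout, attempts the same kind of SELECT that failed in the traceback above.

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway database, not FlexGet's real file.
path = os.path.join(tempfile.mkdtemp(), 'demo.sqlite')

writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute('CREATE TABLE kv (k TEXT, v TEXT)')
writer.execute('BEGIN EXCLUSIVE')  # hold an exclusive lock on the database

# timeout= is sqlite3's busy timeout (default 5 s); kept tiny here so the
# demo fails fast instead of waiting out the lock.
reader = sqlite3.connect(path, timeout=0.1)
error = None
try:
    reader.execute('SELECT * FROM kv').fetchall()
except sqlite3.OperationalError as exc:
    error = str(exc)

writer.execute('ROLLBACK')
print(error)  # -> database is locked
```

Raising that timeout (as the Django docs quoted above suggest) only makes the second thread wait longer; it does not remove the contention itself.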
Temporary fix:
I have modified FlexGet to use a separate sqlite3 database file db-apscheduler.sqlite
for APScheduler. Since then it has been running for almost three weeks without any issues.
diff --git a/flexget/plugins/daemon/scheduler.py b/flexget/plugins/daemon/scheduler.py
index 16fb35652..7892f6a2d 100644
--- a/flexget/plugins/daemon/scheduler.py
+++ b/flexget/plugins/daemon/scheduler.py
@@ -121,7 +121,18 @@ def setup_scheduler(manager):
     global scheduler
     if logging.getLogger().getEffectiveLevel() > logging.DEBUG:
         logging.getLogger('apscheduler').setLevel(logging.WARNING)
-    jobstores = {'default': SQLAlchemyJobStore(engine=manager.engine, metadata=Base.metadata)}
+    # jobstores = {'default': SQLAlchemyJobStore(engine=manager.engine, metadata=Base.metadata)}
+
+
+    # Temporary fix for "OperationalError: (sqlite3.OperationalError) database is locked" error:
+    # Use a separate sqlite3 database file 'db-apscheduler.sqlite' for APScheduler
+    import os
+    db_filename = os.path.join(manager.config_base, 'db-apscheduler.sqlite')
+    db_filename = db_filename.replace('\\', '\\\\')
+    database_uri = 'sqlite:///%s' % db_filename
+    jobstores = {'default': SQLAlchemyJobStore(url=database_uri)}
+
+
     # If job was meant to run within last day while daemon was shutdown, run it once when continuing
     job_defaults = {'coalesce': True, 'misfire_grace_time': 60 * 60 * 24}
     try:
temp_fix_database_is_locked.patch.txt
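As a side note, the backslash-escaping line in the patch is only needed for Windows paths inside the SQLAlchemy URL. A sketch of a more portable way to build the same URL, using Python 3's pathlib (the function name is my own, not FlexGet code):

```python
from pathlib import Path

def apscheduler_db_url(config_base):
    # Build a sqlite:/// URL for the separate 'db-apscheduler.sqlite' file.
    # as_posix() always yields forward slashes, so no manual backslash
    # escaping is needed on Windows.
    return 'sqlite:///%s' % Path(config_base, 'db-apscheduler.sqlite').as_posix()

print(apscheduler_db_url('/root/.flexget'))
# -> sqlite:////root/.flexget/db-apscheduler.sqlite
```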
Additional information:
- FlexGet version: 2.14.20.dev
- Python version: 2.7.12
- Installation method: pip
- Using daemon (yes/no): yes
- OS and version: Ubuntu Server 16.04