When trying to do pretty much any Toolkit-related operation, the following error appears:
ERROR: A general error was reported: database is locked
Traceback (most recent call last):
  File "/Volumes/Server02_D/tank/install/core/scripts/tank_cmd.py", line 1068, in <module>
  File "/Volumes/Server02_D/tank/install/core/scripts/tank_cmd.py", line 842, in run_engine_cmd
    ctx = tk.context_from_path(ctx_path)
  File "/Volumes/Server02_D/tank/install/core/python/tank/api.py", line 413, in context_from_path
    return context.from_path(self, path, previous_context)
  File "/Volumes/Server02_D/tank/install/core/python/tank/context.py", line 706, in from_path
    path_cache = PathCache(tk.pipeline_configuration)
  File "/Volumes/Server02_D/tank/install/core/python/tank/path_cache.py", line 37, in __init__
  File "/Volumes/Server02_D/tank/install/core/python/tank/path_cache.py", line 78, in _init_db
OperationalError: database is locked
Solutions and things to check
This can be caused by a number of different things:
Ensure that the path cache file, and the cache directory it resides in, are writable by everyone. You can find the path cache either in a tank folder inside your project storage, or inside the cache folder of your pipeline configuration.
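As a minimal sketch of the permission fix, assuming a layout like the one described above (the temporary directory below is only a runnable stand-in for your real project storage):

```python
import os
import stat
import tempfile

# Stand-in for <project_root>/tank/cache so this sketch runs anywhere;
# substitute the real location of your path cache.
cache_dir = os.path.join(tempfile.mkdtemp(), "tank", "cache")
os.makedirs(cache_dir)
db_path = os.path.join(cache_dir, "path_cache.db")
open(db_path, "w").close()

# Directory: read/write/execute for everyone; file: read/write for everyone.
os.chmod(cache_dir, 0o777)
os.chmod(db_path, 0o666)

print(oct(stat.S_IMODE(os.stat(db_path).st_mode)))  # -> 0o666
```

The equivalent from a shell would be a chmod on the cache directory and the database file.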
Check for crashed processes
A crashed or zombie process may be keeping a connection to the database open, effectively locking it. On Linux and macOS you can check for this by running lsof | grep path_cache, which lists any process that is accessing the path cache file. Also check whether there are any lock files or journal files residing next to the path cache file for the current project; these are another indication of a crashed or zombie process. Sqlite tries to auto-recover as much as possible, but as long as there is a handle open to the database, the database may remain locked.
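The journal-file check can be sketched in Python as well. Sqlite leaves sidecar files (such as a -journal file) next to the database while writing; if they linger with no process attached, a writer likely crashed. The setup below simulates that situation in a temporary directory, purely for illustration:

```python
import glob
import os
import tempfile

# Stand-in location; point db_path at your project's real path_cache.db.
cache_dir = tempfile.mkdtemp()
db_path = os.path.join(cache_dir, "path_cache.db")
open(db_path, "w").close()
open(db_path + "-journal", "w").close()  # simulate a stale journal file

# Anything matching path_cache.db* other than the db itself is suspicious.
leftovers = [p for p in glob.glob(db_path + "*") if p != db_path]
if leftovers:
    print("possible crashed writer; leftover files:", leftovers)
```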
Check your cifs mount parameters
If you are mounting CIFS storage, it is possible that sqlite locking will not work if your network storage is mounted without the nolock parameter. In this case, try adding the nolock parameter to your fstab:
//myserver /mymount cifs username=*****,password=*****,iocharset=utf8,sec=ntlm,file,nolock,file_mode=0700,dir_mode=0700,uid=0500,gid=0500 0 0
From the fstab manual:
lock/nolock Selects whether to use the NLM sideband protocol to lock files on the server. If neither option is specified (or if lock is specified), NLM locking is used for this mount point. When using the nolock option, applications can lock files, but such locks provide exclusion only against other applications running on the same client. Remote applications are not affected by these locks.
Still getting the error?
If you are still having issues after trying the suggestions above, drop us a line at firstname.lastname@example.org.
A Deeper Explanation
The database that the error message refers to is a sqlite database file, located at:
<project_root>/tank/cache/path_cache.db. The path cache stores the paths on disk that are computed when Toolkit's folder creation runs, and it gives us a way to do a reverse lookup, so that we can determine which Shotgun entities correspond to a given path. There are currently known issues with the performance of sqlite databases running on NFS mounts; if the file is stored on an NFS mount at your studio, that could also be part of the issue. The plan going forward is to store this data in Shotgun itself, and then maintain the sqlite file as a cache of that Shotgun data, kept in sync by the system.
Having the data in Shotgun means that the cache file can reside on local machines (or even in temp space), so we would not need to keep it on shared storage. These are changes we are working on right now, and we hope to release them very soon in v0.15 of the Toolkit core.
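For the curious, the "database is locked" condition itself is easy to reproduce with Python's built-in sqlite3 module. Sqlite allows only one writer at a time, so a second connection that tries to write while another holds a write transaction fails once its busy timeout expires. The table schema below is made up for illustration and is not the real path cache schema:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "path_cache.db")

# First connection: hold a write transaction open, as a hung Toolkit
# process would. (isolation_level=None gives manual transaction control.)
writer = sqlite3.connect(db, isolation_level=None)
writer.execute("CREATE TABLE path_cache (entity_id INTEGER, path TEXT)")
writer.execute("BEGIN IMMEDIATE")  # take and hold the write lock

# Second connection: any write now fails after its busy timeout expires.
other = sqlite3.connect(db, timeout=0.1)
err = None
try:
    other.execute("INSERT INTO path_cache VALUES (1, '/proj/shot_010')")
except sqlite3.OperationalError as exc:
    err = exc

print(err)  # -> database is locked
writer.rollback()
```

This is exactly why a crashed process that never releases its transaction, or a network mount where locking misbehaves, makes every subsequent Toolkit operation fail with this error.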