
LockObtainFailedException

Posted: Wed Dec 11, 2019 11:35 am
by openkm_user
Hello,

We are currently checking a condition in our database to set permissions in the DMS through the REST API. For example, if a user is the owner of an account (a unique account number) in our database, we grant that user read + write permission on the corresponding folder in OpenKM (the folder name is the same account number) through the API.
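For reference, such a grant call might be assembled as sketched below. This is a minimal sketch, not taken from this thread: the endpoint path `/services/rest/auth/grantUser`, the parameter names, the folder path, and the permission bit values (READ=1, WRITE=2) are assumptions about the OpenKM community REST API and should be checked against your version's documentation.

```python
from urllib.parse import urlencode

# Assumed OpenKM permission bit values (com.openkm.bean.Permission);
# verify against your OpenKM version before relying on them.
READ = 1
WRITE = 2

def build_grant_request(base_url, node_path, user, permissions):
    """Build (url, query) for a hypothetical grantUser REST call.

    The endpoint path below is an assumption; check your OpenKM
    REST API docs for the exact route and parameter names.
    """
    url = base_url.rstrip("/") + "/services/rest/auth/grantUser"
    query = urlencode({
        "nodeId": node_path,
        "user": user,
        "permissions": permissions,
    })
    return url, query

# Example: grant read + write on the folder named after the account number
url, query = build_grant_request(
    "http://localhost:8080/OpenKM",
    "/okm:root/accounts/ACC-001234",   # hypothetical folder path
    "jdoe",
    READ | WRITE,                      # bitmask: 1 | 2 = 3
)
```

When millions of such calls are fired in a tight loop, each grant triggers index work on the Lucene side, which is what makes a stale `write.lock` surface as the error below.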

However, since there are millions of records in the DMS, it continuously returns the following error:
Code:
2019-12-11 03:14:35,499 [Hibernate Search: Directory writer-1] ERROR o.h.s.exception.impl.LogErrorHandler - Exception occurred org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@C:\path\repository\index\com.openkm.dao.bean.NodeBase\write.lock
Primary Failure:
	Entity com.openkm.dao.bean.NodeDocument  Id b852eda7-43f4-486c-bb47-44b577118120  Work Type  org.hibernate.search.backend.DeleteLuceneWork
Subsequent failures:
	Entity com.openkm.dao.bean.NodeDocument  Id b852eda7-43f4-486c-bb47-44b577118120  Work Type  org.hibernate.search.backend.AddLuceneWork

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@C:\path\repository\index\com.openkm.dao.bean.NodeBase\write.lock
	at org.apache.lucene.store.Lock.obtain(Lock.java:84) ~[lucene-core-3.1.0.jar:3.1.0 1085809 - 2011-03-26 17:59:57]
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1097) ~[lucene-core-3.1.0.jar:3.1.0 1085809 - 2011-03-26 17:59:57]
	at org.hibernate.search.backend.Workspace.createNewIndexWriter(Workspace.java:202) ~[hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	at org.hibernate.search.backend.Workspace.getIndexWriter(Workspace.java:180) ~[hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:103) [hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_231]
	at java.lang.Thread.run(Unknown Source) [na:1.8.0_231]
2019-12-11 03:14:35,499 [Hibernate Search: Directory writer-1] ERROR o.h.s.b.i.lucene.PerDPQueueProcessor - Unexpected error in Lucene Backend: 
org.hibernate.search.SearchException: Unable to remove class com.openkm.dao.bean.NodeDocument#b852eda7-43f4-486c-bb47-44b577118120 from index.
	at org.hibernate.search.backend.impl.lucene.works.DeleteWorkDelegate.performWork(DeleteWorkDelegate.java:91) ~[hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106) ~[hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_231]
	at java.lang.Thread.run(Unknown Source) [na:1.8.0_231]
Caused by: java.lang.NullPointerException: null
	at org.hibernate.search.backend.impl.lucene.works.DeleteWorkDelegate.performWork(DeleteWorkDelegate.java:87) ~[hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	... 6 common frames omitted
2019-12-11 03:14:35,499 [Hibernate Search: Directory writer-1] ERROR o.h.s.exception.impl.LogErrorHandler - Exception occurred org.hibernate.search.SearchException: Unable to remove class com.openkm.dao.bean.NodeDocument#b852eda7-43f4-486c-bb47-44b577118120 from index.
Primary Failure:
	Entity com.openkm.dao.bean.NodeDocument  Id b852eda7-43f4-486c-bb47-44b577118120  Work Type  org.hibernate.search.backend.DeleteLuceneWork
Subsequent failures:
	Entity com.openkm.dao.bean.NodeDocument  Id b852eda7-43f4-486c-bb47-44b577118120  Work Type  org.hibernate.search.backend.AddLuceneWork
	Entity com.openkm.dao.bean.NodeDocument  Id b852eda7-43f4-486c-bb47-44b577118120  Work Type  org.hibernate.search.backend.AddLuceneWork

org.hibernate.search.SearchException: Unable to remove class com.openkm.dao.bean.NodeDocument#b852eda7-43f4-486c-bb47-44b577118120 from index.
	at org.hibernate.search.backend.impl.lucene.works.DeleteWorkDelegate.performWork(DeleteWorkDelegate.java:91) ~[hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106) ~[hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_231]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_231]
	at java.lang.Thread.run(Unknown Source) [na:1.8.0_231]
Caused by: java.lang.NullPointerException: null
	at org.hibernate.search.backend.impl.lucene.works.DeleteWorkDelegate.performWork(DeleteWorkDelegate.java:87) ~[hibernate-search-3.4.2.Final.jar:3.4.2.Final]
	... 6 common frames omitted
I cleared the okm_activity and okm_dashboard_activity tables, which were hogging a lot of space and are of no use to us.

Please advise!

Re: LockObtainFailedException

Posted: Fri Dec 13, 2019 4:10 pm
by jllort
The error is caused by Lucene. When the application is powered off without a clean shutdown, the Lucene search engine may be damaged, or it may be unable to create the write.lock file (because the file still exists).
Code:
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@C:\path\repository\index\com.openkm.dao.bean.NodeBase\write.lock
Try 1:
* Stop OpenKM
* Delete the write.lock file
* Start OpenKM

Try 2 - if the error persists:
* Stop OpenKM
* Delete the index folder (only the index folder)
* Start OpenKM
* Go to Administration > Tools > Rebuild index -> choose the "Lucene index" option (the Lucene index will be rebuilt; it may take minutes or hours depending on your repository size, usually minutes).
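The file-cleanup part of the steps above can be sketched as a small script. This is an illustration only, assuming a hypothetical index path; OpenKM must already be stopped before deleting anything, and the Administration > Tools > Rebuild index step is still done in the UI afterwards:

```python
import shutil
from pathlib import Path

def clear_lucene_locks(index_root):
    """Try 1: delete stale write.lock files under the index folder.

    Run only while OpenKM is stopped; otherwise a live IndexWriter
    may legitimately hold the lock.
    """
    removed = []
    for lock in Path(index_root).rglob("write.lock"):
        lock.unlink()
        removed.append(lock)
    return removed

def delete_index_folder(index_root):
    """Try 2: remove the whole index folder so the subsequent
    'Rebuild index' (Lucene index option) can repopulate it."""
    shutil.rmtree(index_root, ignore_errors=True)

# Hypothetical path, matching the layout seen in the stack trace:
# clear_lucene_locks(r"C:\path\repository\index")
```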

Re: LockObtainFailedException

Posted: Sat Dec 14, 2019 7:40 am
by openkm_user
Thanks for the response. It will take a couple of days to rebuild the indexes for a repository of our size.