caching - Neo4j GC overhead limit exceeded


While running a test, I got a "GC overhead limit exceeded" error. My understanding is that this happened because many primitives were loaded into the cache. Is that wrong?

My question, then: how can we prevent this? For example, can we estimate the amount of memory needed based on the number of primitives? Is there a rule of thumb to approximate it?

My boss wants to know how many primitives we can manage at the same time. I assume this is related to JVM settings, but I can't manage to find the right settings.

Sorry if these are dumb questions; I'm not used to JVM settings and performance tuning, and I have a pretty huge lack of knowledge at the moment. I'm trying, and willing to understand, though!

jimmy.

Understanding the details of Java garbage collection is a far from trivial thing. Since your question is rather unspecific, I can only provide a rather unspecific answer as well. There is a section in the Neo4j reference manual on JVM settings: http://docs.neo4j.org/chunked/stable/configuration-jvm.html.
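As a sketch of what that manual section covers: in Neo4j releases of that era, the server's heap size is set in conf/neo4j-wrapper.conf. The values below are illustrative placeholders, not recommendations; size the heap to your machine and graph:

```properties
# conf/neo4j-wrapper.conf
# Initial and maximum JVM heap size, in MB (example values).
# "GC overhead limit exceeded" usually means the heap is too small
# for the working set, so raising these is the first thing to try.
wrapper.java.initmemory=2048
wrapper.java.maxmemory=2048
```

Setting the initial and maximum size to the same value avoids heap resizing pauses, which is a common choice for server workloads.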

Another idea, depending on your graph and heap size, is to change the implementation type of the object cache. There are soft (the default in Community edition), weak, and strong cache implementations. Additionally, the Enterprise edition comes with the hpc (high-performance cache) implementation, which reduces the number of full garbage collections by dynamically adjusting the cache size. For more, read http://docs.neo4j.org/chunked/stable/configuration-caches.html#_object_cache.
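For reference, the object cache implementation is selected with the cache_type setting in conf/neo4j.properties. The choice of weak below is only an example; a weak cache gives the GC more room to reclaim entries under memory pressure, at the cost of more cache misses:

```properties
# conf/neo4j.properties
# Object cache implementation: soft (default in Community),
# weak, strong, hpc (Enterprise only), or none.
cache_type=weak
```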

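On the original "how much memory per primitive" question: short of a profiler, one rough empirical approach is to compare heap usage before and after loading a representative sample, then extrapolate. A minimal sketch; the array of boxed longs here is a hypothetical stand-in for your cached primitives, not Neo4j's actual cache entries, and System.gc() is only a best-effort hint, so treat the numbers as ballpark figures:

```java
public class HeapEstimate {
    // Bytes currently in use on the heap.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Approximate heap cost of holding n boxed longs, measured empirically.
    static long measure(int n) {
        System.gc(); // best-effort: let the heap settle before measuring
        long before = usedHeap();
        Long[] data = new Long[n];
        for (int i = 0; i < n; i++) {
            data[i] = (long) i;
        }
        long after = usedHeap();
        // Touch the array so it stays live until after the second measurement.
        if (data[n - 1] != (long) (n - 1)) {
            throw new AssertionError("unexpected value");
        }
        return after - before;
    }

    public static void main(String[] args) {
        long bytes = measure(1_000_000);
        System.out.println("Approx bytes for 1,000,000 boxed longs: " + bytes);
        System.out.println("Approx bytes per entry: " + (bytes / 1_000_000));
    }
}
```

Dividing the measured delta by the element count gives a per-entry estimate you can multiply by your expected primitive count to size the heap.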
