Elasticsearch requires free disk space and, to prevent the disk from being flooded with index data, implements a safety mechanism (see Elasticsearch's disk-based shard allocation watermarks).
When this safety mechanism triggers, Elasticsearch automatically:
- non DCE: makes all indices read-only when used disk reaches 95%
- DCE: makes all (or possibly only some; not tested) indices read-only when one or more nodes reach 95% used disk
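When the flood stage triggers, Elasticsearch marks the affected indices with the `index.blocks.read_only_allow_delete` block in their index settings, which can be inspected via `GET /_all/_settings`. A minimal sketch of spotting blocked indices in such a response; the index names below are made up for illustration:

```python
import json

# Sample shaped like Elasticsearch's GET /_all/_settings response.
# Index names are illustrative, not SonarQube's real index names.
sample = json.loads("""
{
  "components": {"settings": {"index": {"blocks": {"read_only_allow_delete": "true"}}}},
  "issues":     {"settings": {"index": {"blocks": {}}}}
}
""")

def blocked_indices(settings):
    """Return names of indices carrying the flood-stage read-only block."""
    return sorted(
        name for name, idx in settings.items()
        if idx["settings"]["index"].get("blocks", {})
              .get("read_only_allow_delete") == "true")

print(blocked_indices(sample))  # -> ['components']
```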
ES starts logging warnings as soon as disk usage reaches 85%, and again at 90%.
Errors start to occur in the web and compute engine components when indices go read-only at 95% usage and above.
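The 85/90/95% thresholds correspond to Elasticsearch's default disk watermarks (`cluster.routing.allocation.disk.watermark.low`, `.high`, and `.flood_stage`). A tiny helper classifying a disk usage percentage against those defaults, for illustration only:

```python
def watermark_stage(disk_used_pct):
    """Map a disk usage percentage to Elasticsearch's default watermark stages."""
    if disk_used_pct >= 95:
        return "flood_stage"  # indices are made read-only
    if disk_used_pct >= 90:
        return "high"         # warnings; ES tries to relocate shards away
    if disk_used_pct >= 85:
        return "low"          # warnings; no new shards allocated to the node
    return "ok"

print(watermark_stage(87))  # -> low
print(watermark_stage(96))  # -> flood_stage
```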
If the user frees up disk space, the indices *will not* be made read-write automatically:
- non DCE: indices are made read-write again by restarting SQ
- DCE: indices are made read-write again by restarting *ALL* the app nodes (once all app nodes have been taken down, the first one started back up makes the indices read-write)
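For reference, on plain Elasticsearch the block can also be cleared through the index settings API by setting `index.blocks.read_only_allow_delete` to `null`. Whether doing this against SonarQube's embedded ES is supported is not stated above, and the base URL below (default search port 9001) is an assumption; this sketch only builds the request, it does not send it:

```python
import json
import urllib.request

def build_unblock_request(base_url="http://localhost:9001"):
    """Build (but do not send) the PUT request that clears the read-only
    block on all indices; base_url is an assumed SonarQube ES address."""
    body = json.dumps({"index.blocks.read_only_allow_delete": None}).encode()
    return urllib.request.Request(
        f"{base_url}/_all/_settings",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

req = build_unblock_request()
print(req.get_method(), req.full_url)  # -> PUT http://localhost:9001/_all/_settings
```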
SonarQube comes with a built-in resilience mechanism (is it documented already?) which, in principle, allows SQ to eventually recover from the indices lagging behind the data in the DB (with no guarantee of how long that takes).
If inconsistencies persist, the only option is to rebuild the indices (the operation can be very long):
- non DCE: stop SQ, delete the data/es6 directory, and start SQ; the indices will be rebuilt
- DCE: stop the whole cluster (ES and app nodes, following the documented procedure), delete the data/es6 directory on each ES node, then start the whole cluster (following the documented procedure)