ELK seems nice and all, but also resource heavy, and if I understood everything correctly it also has a few gaping holes if not configured properly.
billgates20865 · 69d @ScriptCoded Right now it runs on its own VM, so if the resources aren't used, that'd be a waste.
Not sure about filestash but I haven't seen anything odd.
Maybe we don't get enough data. It takes a long time to set up because we have to build everything from scratch rather than just being onboarded, so there's lots of trial and error and figuring stuff out...
But the thing is, it just feels like: when are you all going to stop living in the 90s and 00s... Get up to date already...
magicMirror6560 · 69d Be careful with the F-ing ES indexes. ES wants to index All Things, and that's what usually causes it to be resource heavy. Use an index whitelist (only a few specific fields), and production resource use will go down significantly.
You can later move the data to a dedicated ES instance, and index whatever there.
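The whitelist idea can be sketched as an explicit mapping with dynamic mapping turned off, so only the listed fields get indexed. This is a minimal sketch in Kibana dev-tools style; the index name and field names are placeholders, not anything from the thread:

```json
PUT /app-logs
{
  "mappings": {
    "dynamic": false,
    "properties": {
      "@timestamp": { "type": "date" },
      "level":      { "type": "keyword" },
      "service":    { "type": "keyword" },
      "message":    { "type": "text" }
    }
  }
}
```

With `"dynamic": false`, fields outside the whitelist are still kept in `_source` but are not indexed, so random extra keys in a log line no longer grow the mapping.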
billgates20865 · 69d @magicMirror An index whitelist? I tried data mapping, which I think helped; it stopped indexing all the different query parameters.
My setup is: logs live on the prod server, they get shipped in real time to the ELK instance, and LS (Logstash) processes them and loads them into the indexes.
Most of the CPU, I think, goes to Kibana/ES queries. It seems to do searches and graphs pretty fast...
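A ship-then-process pipeline like that usually boils down to a Logstash config along these lines. This is only a sketch under assumptions: the thread doesn't show the actual config, so the input type, grok pattern, and index name are all invented for illustration:

```conf
input {
  beats { port => 5044 }   # logs shipped in real time from the prod server
}

filter {
  grok {
    # parse the raw line into structured fields (pattern is a guess)
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"   # one index per day
  }
}
```

The daily index pattern keeps individual indexes small, which also makes it cheaper to drop old data.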
magicMirror6560 · 69d @billgates The issue with ES I'm referring to is the "cost" of adding data to the ES store. If your logs are structured in a "good" way and devs don't log dumb stuff, then no problem. But if devs add logs with 20+ fields of random key/values, ES will slow down significantly after a while, as it updates 100+ indexes per data point.
Mapping should take care of the problem nicely. The reason most ELK deploys fail is no mapping, or as I call it, "index whitelisting".
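If you want those 20-field log lines to fail loudly instead of silently going unindexed, ES mappings also accept `"dynamic": "strict"`, which rejects any document containing a field not in the mapping. A minimal sketch, with placeholder index and field names:

```json
PUT /strict-logs
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "@timestamp": { "type": "date" },
      "message":    { "type": "text" }
    }
  }
}
```

Indexing a document with any extra field into this index returns a `strict_dynamic_mapping_exception`, which surfaces sloppy logging at write time rather than as slow queries later.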