Comments
-
ELK seems nice and all, but also resource heavy, and if I understood everything correctly it also has a few gaping holes if not configured properly.
-
@ScriptCoded Right now it runs on its own VM, so if the resources aren't used, that'd be a waste.
Not sure about filestash, but I haven't seen anything odd.
Maybe we just don't get enough data. It takes a long time to set up because we have to build everything from scratch rather than just being onboarded, so there's lots of trial and error and figuring stuff out...
But the thing is, it just feels like: when are you all going to stop living in the 90s and 00s... Get up to date already...
-
@billgates No doubt that ELK is great when set up, and it does seem nice, but I doubt my own skills when it comes to the setup.
-
Be careful with the f-ing ES indexes. ES wants to index All Things, and that's usually what causes it to be resource heavy. Use an index whitelist (only a few specific fields), and production resource use will go down significantly.
You can later move the data to a dedicated ES instance and index whatever you want there.
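For reference, a minimal sketch of what such a whitelist mapping could look like, assuming the elasticsearch-py 8.x client; the index name app-logs and the field names are placeholders:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# "dynamic": False tells ES to keep unlisted fields in _source
# but not to map or index them -- only the whitelisted fields
# below get indexed. Use "strict" instead to reject rogue fields outright.
es.indices.create(
    index="app-logs",  # hypothetical index name
    mappings={
        "dynamic": False,
        "properties": {
            "@timestamp": {"type": "date"},
            "level": {"type": "keyword"},
            "message": {"type": "text"},
            "status": {"type": "integer"},
        },
    },
)
```

With `dynamic` set to `false`, stray fields are still stored in the document source, so nothing is lost; they just stop costing mapping updates and index writes.
-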
@magicMirror Index whitelist? I tried data mapping, which I think helped; it stopped indexing all the different query parameters.
My setup: the logs live on the prod server and get shipped in real time to the ELK instance, where LS processes them and loads them into the indexes.
Most of the CPU, I think, goes to Kibana/ES queries. It seems to do searches and graphs pretty fast...
-
@billgates The issue with ES I'm referring to is the "cost" of adding data to the ES store. If your logs are structured in a "good" way and the devs don't log dumb stuff, then no problem. But if the devs add logs with 20+ fields of random key/values, ES will slow down significantly after a while, as it updates 100+ indexes per data point.
Mapping should take care of the problem nicely. The reason most ELK deploys fail is no mapping - or, as I call it, "index whitelisting".
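To check whether that kind of mapping explosion is already happening, you can count how many fields the mapping has accumulated. A rough sketch, again assuming the elasticsearch-py 8.x client and the hypothetical app-logs index (ES itself caps this via the index.mapping.total_fields.limit setting, 1000 by default):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def count_fields(props: dict) -> int:
    """Recursively count mapped fields, descending into object fields."""
    total = 0
    for spec in props.values():
        total += 1
        if "properties" in spec:  # object field: count its subfields too
            total += count_fields(spec["properties"])
    return total

mapping = es.indices.get_mapping(index="app-logs")
props = mapping["app-logs"]["mappings"].get("properties", {})
print(count_fields(props), "mapped fields")
```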
Spent another half day learning ELK and how to programmatically query and run aggregations against the data that's now collected, so I can feed it into a testing framework for releases.
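Something along these lines, sketched with the elasticsearch-py 8.x client; the index pattern, field names, and threshold are made up for illustration:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Count log entries per level over the last 24h -- size=0 skips
# the hits themselves, since only the aggregation buckets matter here.
resp = es.search(
    index="app-logs-*",  # hypothetical index pattern
    size=0,
    query={"range": {"@timestamp": {"gte": "now-24h"}}},
    aggregations={"by_level": {"terms": {"field": "level"}}},
)

counts = {b["key"]: b["doc_count"]
          for b in resp["aggregations"]["by_level"]["buckets"]}

# A release gate could then assert the error volume stayed sane:
assert counts.get("ERROR", 0) < 100, f"too many errors: {counts}"
```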
I sorta feel like I'm dragging everyone else into the light...
Like "you see what you've been missing all these years? This is how it's supposed to be these days..."
Data, data, data... Useful data... This is what you can do when you have structured and searchable logs rather than huge messy text files...