
Am I crazy?

Right now we have an API which returns a full week's planning for 300 employees, with indicators (like "late", "may be postponed", etc.), in 4 seconds.

I'm under pressure, with people telling me it's not fast enough.

I honestly think it is fast.

In terms of data, it's around 100 MB of JSON. AND you can do actions on the whole set if needed.

Long story short, I think 4 seconds to get all that data is pretty great. Customers think they should have it instantly.

(Never mind the whole filtering system at their disposal; they literally only load the full set and then MANUALLY scroll (yes, there is a quick search box)).

What more can I do????? Cache it? I can. But they also expect any changed value to be reflected.

And we fucking do it. While you are on the page, a SignalR connection is created; it gets notified whenever any of the data changes and updates it on the front end. Takes around 500 ms.
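
For the curious, the push side is roughly this shape in ASP.NET Core SignalR (hub, method and payload names here are made up for illustration, not our actual code):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Empty hub: clients just connect and listen, the server pushes.
public class PlanningHub : Hub { }

// Called from the write path whenever a planning value is persisted.
public class PlanningChangeNotifier
{
    private readonly IHubContext<PlanningHub> _hub;

    public PlanningChangeNotifier(IHubContext<PlanningHub> hub) => _hub = hub;

    // Push only the changed cell, not the whole 100 MB set.
    public Task NotifyAsync(PlanningCellChanged change) =>
        _hub.Clients.All.SendAsync("planningChanged", change);
}

// Hypothetical payload shape for a single changed cell.
public record PlanningCellChanged(int EmployeeId, DateOnly Day, string Indicator);
```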

Apparently "too slow".

I honestly don't see what more we can do with our small 4-dev team.

Give me 56 developers and I could do something, but right now I'm proud of the result.

Comments
  • 6
    Not crazy.
    What about pagination? Load the first n results, then the next n... They won't notice unless they scroll to the bottom before the next lot arrives (not likely).
    You could even load the first n results and the rest asynchronously: while they're entertained with the first results for those 4 seconds, the rest of the data is being loaded.
    Just a brain fart, I'm sure you have thought of something like this, but still, just in case...
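
    Something like this rough sketch of a paged endpoint (controller, route and types are invented here; adapt to the real planning API):
    ```csharp
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("api/planning")]
    public class PlanningController : ControllerBase
    {
        private readonly IPlanningService _planning;

        public PlanningController(IPlanningService planning) => _planning = planning;

        // GET api/planning?week=2024-W05&skip=0&take=50
        // Returns one page of employees; the client requests the next page
        // while the user is still looking at the first one.
        [HttpGet]
        public async Task<ActionResult<PlanningPage>> Get(string week, int skip = 0, int take = 50)
        {
            var page = await _planning.GetWeekAsync(week, skip, take);
            return Ok(page);
        }
    }

    public record PlanningPage(int Total, IReadOnlyList<EmployeeWeek> Items);
    public record EmployeeWeek(int EmployeeId, string Name, IReadOnlyList<string> Indicators);

    public interface IPlanningService
    {
        Task<PlanningPage> GetWeekAsync(string week, int skip, int take);
    }
    ```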
  • 1
    @c3r38r170 Was thinking the same thing. Generate and send data only as needed.
    If there's accumulation that has to be done across the whole result set, at least show the first batch of data as soon as possible while the rest of it is being calculated.
  • 1
    @c3r38r170 Lazy load! Lazy load!
  • 1
    So much this ^^

    @YouAllSuck you ok?

    @Cyanide something's not quite right these days 🤔
  • 0
    @C0D4 you think? Here’s a hint:
    Everything is happening the same way in many locations, and we're likely on a private WAN posing as the internet
  • 0
    Perhaps enabling HTTP/2 could increase speed. At least that's what Lighthouse in dev tools says. Apparently it can make the transferred data binary and compressed in the process.
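
    If the backend is on Kestrel, enabling it is just config in Program.cs (minimal sketch; browsers will only actually use HTTP/2 over HTTPS):
    ```csharp
    using Microsoft.AspNetCore.Server.Kestrel.Core;

    var builder = WebApplication.CreateBuilder(args);

    // Allow HTTP/2 alongside HTTP/1.1 on all endpoints.
    builder.WebHost.ConfigureKestrel(kestrel =>
        kestrel.ConfigureEndpointDefaults(listen =>
            listen.Protocols = HttpProtocols.Http1AndHttp2));

    var app = builder.Build();
    app.Run();
    ```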
  • 0
    Yes, caching could help....
    Use a map-like store that supports manual set/get operations.
    Whenever your data changes state, write the pre-computed output to the cache (basically producing the output in advance), and have the API always return only cached data.

    It should shave some time off the backend operations if your situation allows it.
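
    A hedged sketch of that write-through idea with IMemoryCache (key scheme and types are invented here):
    ```csharp
    using System.Collections.Generic;
    using Microsoft.Extensions.Caching.Memory;

    public class PlanningCache
    {
        private readonly IMemoryCache _cache;

        public PlanningCache(IMemoryCache cache) => _cache = cache;

        private static string Key(string week) => $"planning:{week}";

        // Write path: recompute the week's view whenever data changes and store it.
        public void Store(string week, PlanningWeekView view) =>
            _cache.Set(Key(week), view);

        // Read path: the API only ever serves what is already in the cache.
        public bool TryGet(string week, out PlanningWeekView? view) =>
            _cache.TryGetValue(Key(week), out view);
    }

    // Placeholder for the pre-computed, FE-ready week view.
    public record PlanningWeekView(IReadOnlyList<object> Rows);
    ```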
  • 0
    @YouAllSuck Like Dilbert (you have to get to the end) or like Discord (it loads more when you approach the end)?
  • 1
    CQRS would certainly help (although it adds a lot of complexity). It is basically a super eager cache: as soon as some data changes, you update the cache. The cache being a second (document) DB that contains the ready-to-view JSON for the FE; no additional processing required.

    There is a sync delay between the write and read DB but other than that it's super fast.
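
    A very rough sketch of that projection step (event, store and document names are hypothetical; any document DB would do):
    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Change event raised by the write side.
    public record ShiftChanged(int EmployeeId, DateOnly Day, string Indicator);

    // Read model: one document per employee-week, already shaped for the FE.
    public class PlanningWeekDoc
    {
        public int EmployeeId { get; set; }
        public Dictionary<DateOnly, string> Indicators { get; } = new();
    }

    // Stand-in for whatever document DB holds the read model.
    public interface IReadModelStore
    {
        Task<PlanningWeekDoc> LoadAsync(int employeeId, DateOnly day);
        Task SaveAsync(PlanningWeekDoc doc);
    }

    // Projection: updates the read DB as soon as a write happens,
    // so the read API only ever returns a stored, ready-to-view document.
    public class PlanningProjection
    {
        private readonly IReadModelStore _readDb;

        public PlanningProjection(IReadModelStore readDb) => _readDb = readDb;

        public async Task Handle(ShiftChanged e)
        {
            var doc = await _readDb.LoadAsync(e.EmployeeId, e.Day);
            doc.Indicators[e.Day] = e.Indicator; // refresh the pre-rendered view
            await _readDb.SaveAsync(doc);        // read side is now up to date
        }
    }
    ```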
  • 0
    A 1s delay between a change and the update is normal; tell them you can't do better without making the whole thing way more expensive. The expected delay for a server push is 2s.
  • 2
    Just cache it and implement events to update it?
  • 1
    I think the key here is communication.

    Don't try to optimize things "randomly" to gain a "possible" effect - especially if you cannot give an estimate of how much the optimization actually helps.

    Any optimization has side effects - at least that the code becomes more complex and error prone ;)

    If it returns a planning for a week with 300 employees...

    Let's say all 7 days are work days. And let's estimate that the status is an ID, the employee is an ID, and the work day is an integer (0...6).

    7 * 300 * 3 = 6300 fields

    That's just a wild estimate, yes - but the point is: you cannot optimize the loading of a result set of **this** size very much.

    HTTP/2 would actually be a worse choice, as everything gets pushed down one connection. Large result sets are not so nice on HTTP/2.

    So communicate.

    How can you reduce the size?

    The easiest way would be to reduce the number of employees returned at once, as that should be the biggest factor.

    Explain to them that the problem is not performance itself; the problem is the size of the dataset.
  • 0
    Get some stats on where the time is being taken and make changes based on what those stats tell you.
    It’s easy to assume that your problem is down to a slow query, but it might not be. We recently had some issues that turned out to be due to Autofac when we had assumed the query was the issue.

    There are some good suggestions here.
    I would add that you could enable compression, and if you’re using JSON, you can tweak the serialiser so that default values are excluded from the serialisation result. Depending on your dataset, that might make a big difference to the size of the payload.
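
    If the stack is ASP.NET Core with System.Text.Json, both tweaks are roughly this (Newtonsoft has an equivalent DefaultValueHandling.Ignore setting):
    ```csharp
    using System.Text.Json.Serialization;

    var builder = WebApplication.CreateBuilder(args);

    // Skip properties that still hold their default value (0, null, false, ...).
    builder.Services.AddControllers().AddJsonOptions(o =>
        o.JsonSerializerOptions.DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingDefault);

    // Compress response bodies (Brotli/gzip), including over HTTPS.
    builder.Services.AddResponseCompression(o => o.EnableForHttps = true);

    var app = builder.Build();
    app.UseResponseCompression();
    app.MapControllers();
    app.Run();
    ```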
  • 0
    @darkwind As I understood it, that compression mainly applies to request headers, benefiting back-and-forth communication more than single requests with long bodies. And browsers enable HTTP/2 automatically as soon as both sides support it and are using HTTPS.