Search - "protobuf"
-
My business partner, who claims to have been the best Wall Street programmer, probably 160 years ago, decided to improve our core system written in Go.
He dragged a *shortcut* of the *whole* local GitHub folder into vendor/ (Go's sort-of node_modules), manually changed the commit hash in Gopkg.lock, etc.
The next morning I woke up to 24 failed builds on master: all the protobuf regenerated with his unknown gogo version, a database trigger function with changed logic and added parameters, and a text message “has anyone experienced build corruption? Works on my mac”.
My other business partners said “it’s okay, he’s going through a tough divorce and needs some distraction”.
F M L
-
ProfaneDB is a database for storing Protobuf objects, working on top of gRPC for cross-compatibility.
It may evolve in any direction: towards a "serverless" kind of solution, a GraphQL-to-Protobuf interface; it may use SQL as a backend...
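A minimal sketch in Go of the core idea (storing arbitrary Protobuf messages together with their type information via google.protobuf.Any). The in-memory map and the key scheme are placeholders for illustration, not ProfaneDB's actual API:

```go
// Illustration only, not ProfaneDB's API: an object store for arbitrary
// protobuf messages, keyed by ID, with type info preserved via Any.
package main

import (
	"fmt"

	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// store stands in for the storage layer a gRPC service would sit on top of.
var store = map[string]*anypb.Any{}

func main() {
	// Any generated message type works; wrapperspb keeps the sketch self-contained.
	msg := wrapperspb.String("hello protobuf store")

	wrapped, err := anypb.New(msg) // records the message's type URL alongside its bytes
	if err != nil {
		panic(err)
	}
	store["greeting/1"] = wrapped

	// On the way out, the type URL tells the client what to unmarshal into.
	out := &wrapperspb.StringValue{}
	if err := store["greeting/1"].UnmarshalTo(out); err != nil {
		panic(err)
	}
	fmt.Println(out.GetValue())
}
```
-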
I feel so useless when code doesn't work due to an external library out of my control
(Protobuf support for Python 3.7 sucks)
-
The project I have been working on was growing and growing and growing... It reached a point where the front-end was really hard to maintain. The worst part was the communication protocol: we were using JSON to serialize really complex objects.
I took some initiative and suggested that we use protobuf instead of JSON. Long story short: data usage is 10% of what it used to be, serialization and deserialization are much faster, and best of all, everything is strongly typed, with auto-generated classes. Fucking awesome!
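A rough illustration of where those savings come from: the same toy record encoded as JSON versus hand-assembled protobuf wire format (roughly what generated code emits for `int64 id = 1; string name = 2; bool active = 3;`). The record and field numbers are invented for the example, and the ratio here is smaller than the 10x above, since real payloads with repeated and numeric-heavy fields shrink further:

```go
package main

import (
	"encoding/json"
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

type record struct {
	ID     int64  `json:"id"`
	Name   string `json:"name"`
	Active bool   `json:"active"`
}

func main() {
	r := record{ID: 123456789, Name: "sensor-7", Active: true}

	jsonBytes, _ := json.Marshal(r)

	// Protobuf wire format: each field is a 1-byte tag plus a compact value,
	// instead of a quoted field name plus a textual value.
	var pb []byte
	pb = protowire.AppendTag(pb, 1, protowire.VarintType)
	pb = protowire.AppendVarint(pb, uint64(r.ID)) // varint, not a decimal string
	pb = protowire.AppendTag(pb, 2, protowire.BytesType)
	pb = protowire.AppendString(pb, r.Name)
	pb = protowire.AppendTag(pb, 3, protowire.VarintType)
	pb = protowire.AppendVarint(pb, 1) // bool true

	fmt.Printf("JSON: %d bytes, protobuf: %d bytes\n", len(jsonBytes), len(pb))
}
```
-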
Design decision:
We have an API and a lot of microservices based on that API. Additionally, we have a store of protobuf templates (files that automate serializing certain events etc.).
Currently, each service gets the API with the general stuff (connection handling etc.) plus a copy of the 5 or 6 proto files it needs. The proto files update sometimes, and so does the API, so for each service there are two things that need to stay updated. Which option would seem more logical to you?
a) Integrate all proto files into the API. Services then only need to update the API, but they also get access to many proto files they don't need (which are required by other services).
or
b) Keep them separated and keep manually updating the proto files for the affected services.
Disclaimer: our proto files are always backwards-compatible by design; both the API and the proto files change fairly frequently.
Ty
-
My team is currently using a Confluent Cloud Kafka architecture with protobuf, and WordPress as the website.
We realized this is quite impractical once you want login functionality and the like, and we'd now like to build a page from scratch.
Problem is, we are all Java backend developers with minor PHP knowledge, and PHP doesn't work well with protobuf or Kafka.
How would you get such a website started? What languages and frameworks are easy to get going?
-
Does anybody know if there's a tool for parsing protobuf from a live network capture? I basically want to be able to pass proto files into something like Wireshark and get a live request/response cycle
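One possible DIY approach, sketched in Go: sniff TCP payloads with gopacket and heuristically check whether they parse as protobuf wire format. The interface name and port are assumptions, and framed transports like gRPC would still need their 5-byte message prefix stripped before the check makes sense:

```go
// Hypothetical sketch: flag TCP payloads that parse cleanly as protobuf.
// "eth0" and port 50051 are assumptions; adjust for your setup.
package main

import (
	"fmt"

	"github.com/google/gopacket"
	"github.com/google/gopacket/pcap"
	"google.golang.org/protobuf/encoding/protowire"
)

// looksLikeProto walks the buffer as protobuf wire format and accepts it
// only if every tag/value pair consumes cleanly. It is a heuristic; other
// binary data can occasionally pass.
func looksLikeProto(b []byte) bool {
	for len(b) > 0 {
		num, typ, n := protowire.ConsumeTag(b)
		if n < 0 || num < 1 {
			return false
		}
		b = b[n:]
		m := protowire.ConsumeFieldValue(num, typ, b)
		if m < 0 {
			return false
		}
		b = b[m:]
	}
	return true
}

func main() {
	handle, err := pcap.OpenLive("eth0", 65535, true, pcap.BlockForever)
	if err != nil {
		panic(err)
	}
	defer handle.Close()
	if err := handle.SetBPFFilter("tcp port 50051"); err != nil {
		panic(err)
	}

	for packet := range gopacket.NewPacketSource(handle, handle.LinkType()).Packets() {
		tl := packet.TransportLayer()
		if tl == nil {
			continue
		}
		if payload := tl.LayerPayload(); len(payload) > 0 && looksLikeProto(payload) {
			fmt.Printf("candidate protobuf payload, %d bytes\n", len(payload))
		}
	}
}
```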
-
Why in god's name does protobuf treat all enum values as living in the same scope??
Who would have thought that multiple enums might have values like "Undefined" or "Automatic"?
It's current year and somehow C-style enums still haunt me
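A runnable way to watch that rule bite, without invoking protoc: enum values are declared in the enum's *parent* scope (C-style), so two enums in the same package cannot both define UNDEFINED. Here this is demonstrated in Go by building the descriptors by hand; the package and enum names are arbitrary:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protodesc"
	"google.golang.org/protobuf/reflect/protoregistry"
	"google.golang.org/protobuf/types/descriptorpb"
)

// fileWithEnum builds a FileDescriptorProto for package "demo" holding one
// enum whose zero value is named UNDEFINED.
func fileWithEnum(file, enumName string) *descriptorpb.FileDescriptorProto {
	return &descriptorpb.FileDescriptorProto{
		Name:    proto.String(file),
		Package: proto.String("demo"),
		Syntax:  proto.String("proto3"),
		EnumType: []*descriptorpb.EnumDescriptorProto{{
			Name: proto.String(enumName),
			Value: []*descriptorpb.EnumValueDescriptorProto{
				{Name: proto.String("UNDEFINED"), Number: proto.Int32(0)},
			},
		}},
	}
}

func main() {
	var reg protoregistry.Files
	for _, fdp := range []*descriptorpb.FileDescriptorProto{
		fileWithEnum("a.proto", "Mode"),
		fileWithEnum("b.proto", "State"),
	} {
		fd, err := protodesc.NewFile(fdp, nil) // no imports, so no resolver needed
		if err != nil {
			fmt.Println(err)
			return
		}
		// Enum values are registered under the *package* scope, so the second
		// file's demo.UNDEFINED collides with the first one's.
		if err := reg.RegisterFile(fd); err != nil {
			fmt.Println("collision:", err)
		}
	}
}
```
-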
Since we have go mod, I believe that if you are doing microservices, or fracturing a monolith app into microservices for that matter, you'd rather have a monorepo. I'm finding it really useful to do `import ("backend/monolith" "backend/notify")` and such, rather than sharing protobuf files by copying them around with some Makefile target.
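A sketch of the layout that enables this; the module name, directory structure, and eventspb.Event type are all hypothetical:

```go
// Hypothetical monorepo layout, assuming one module named "backend":
//
//   backend/
//   ├── go.mod           // module backend
//   ├── gen/eventspb/    // protoc output, generated once for everyone
//   ├── monolith/        // the monolith being fractured
//   └── notify/          // a new microservice
//
// notify can then import the shared generated code directly:
// no Makefile copy step, no per-service proto drift.
package main

import (
	"fmt"

	"backend/gen/eventspb" // hypothetical generated package
)

func main() {
	// Assumed message type from the shared proto definitions.
	evt := &eventspb.Event{Id: "42"}
	fmt.Println("publishing", evt.Id)
}
```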