Comments
-
We almost lost a client once because I said "SSL" instead of "TLS" during the security presentation. I'm glad I'm not the only one still using that term.
-
@juneeighteen Actually, during my college days we read more about SSL than TLS. Everyone used SSL v3 back then, even though TLS v1.0 had already been released. In practice I use TLS v1.2 now, but the term SSL is still what comes to mind whenever I think about the whole SSL/TLS thing.
-
Wow... You got a pretty good setup and know your stuff...
We have 1 db and 4 servers behind a load balancer... We want to scale the servers but I keep telling them they need to replicate or cache the db.... but I guess we'll see once the site hits a wall... again... what they want to do and how much code needs to be rewritten....
-
@billgates Currently we are in the development phase and we have one application server which runs the http and the socket server, one db server and one cache server. The cache and the db server can be expanded easily, as we are using Mongo as the db and Redis as the cache. The problem is with expanding the application servers. The socket server and the http server need to be connected somehow as they share the same data (device information, user details, sessions, access tokens etc.). So we used Redis' pub-sub module so that we get a trigger whenever any of the application servers updates the cache.
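A rough sketch of that pub/sub trigger, assuming redis-py; the device:updates channel name and the device:<id> key layout are made up for illustration, not the project's real schema:

```python
# Sketch of the pub/sub trigger between application servers.
# The "device:updates" channel and "device:<id>" keys are assumptions for this example.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def publish_update(device_id: str, state: dict) -> None:
    """Socket server side: write the new state to the shared cache, then notify the other servers."""
    r.hset(f"device:{device_id}", mapping=state)
    r.publish("device:updates", json.dumps({"device_id": device_id}))

def listen_for_updates() -> None:
    """HTTP server side: get triggered whenever any application server updates the cache."""
    pubsub = r.pubsub()
    pubsub.subscribe("device:updates")
    for message in pubsub.listen():
        if message["type"] != "message":
            continue
        device_id = json.loads(message["data"])["device_id"]
        state = r.hgetall(f"device:{device_id}")
        # push `state` out to the connected web portals over the websocket here
```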
-
@gitpull 🤣🤣 Giving a presentation was not in our scope, knowledge transfer was.
-
@shubhadeepb Yeah, the term SSL is now a bit synonymous with a secure transport. Some frameworks still have things like UseSSL = true, though it actually enables TLS.
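Python's standard library is the same story: the module is still called ssl, but a default context negotiates TLS, and you can pin the floor at TLS 1.2. A quick sketch:

```python
# The module is still named "ssl", but what it actually negotiates is TLS.
import ssl

ctx = ssl.create_default_context()            # SSLv2/SSLv3 are already disabled by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older than TLS 1.2

print(ctx.minimum_version)  # TLSVersion.TLSv1_2 -- despite the module name
```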
-
@shubhadeepb We use mongo too but not sharded. Without a cache, though, to scale performance we would need a distributed db... and I remember changing from a single db to a sharded cluster is a lot of code change....
Caching is the other solution, but management thinks sharding is the easiest answer....
So they want to scale, but they are just kicking more and more rework into the future... without realizing it....
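For what it's worth, the cache-aside route usually means far less rewriting than a reshard. A minimal sketch with redis-py and pymongo, where the db name, collection and TTL are placeholders:

```python
# Cache-aside read: check Redis first, fall back to MongoDB, then backfill the cache.
# The "mydb"/"devices" names and the 30-second TTL are placeholders, not the real schema.
import json
import redis
from pymongo import MongoClient

cache = redis.Redis(decode_responses=True)
devices = MongoClient()["mydb"]["devices"]

def get_device(device_id: str):
    cached = cache.get(f"device:{device_id}")
    if cached is not None:
        return json.loads(cached)
    doc = devices.find_one({"_id": device_id}, {"_id": 0})
    if doc is not None:
        cache.set(f"device:{device_id}", json.dumps(doc), ex=30)  # short TTL keeps reads mostly fresh
    return doc
```
-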
@billgates It's a real pain in the ass when we need to rework working code. I remember the same happened to us when a client told us to implement a multi-server architecture whereas the original requirement was for a single server.
In this project I pushed the multi-server implementation from the very first day because I knew they would need it at some point. This is an IoT system and showing the live data is very important. That's why we needed the cache server. We only use the database for storing the data, and we fetch all the required data into the cache server at the startup of the server applications.
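Something like this for that startup warm-up, assuming redis-py and pymongo again; the collection name, projected fields and key layout are just placeholders:

```python
# Warm the Redis cache from MongoDB once, when an application server starts up.
# Collection name, projected fields and key layout are assumptions for the sketch.
import redis
from pymongo import MongoClient

def warm_cache() -> None:
    cache = redis.Redis(decode_responses=True)
    devices = MongoClient()["mydb"]["devices"]
    pipe = cache.pipeline()
    for doc in devices.find({}, {"_id": 1, "status": 1, "last_seen": 1}):
        pipe.hset(f"device:{doc['_id']}", mapping={
            "status": str(doc.get("status", "")),
            "last_seen": str(doc.get("last_seen", "")),
        })
    pipe.execute()  # one round trip to Redis instead of one per device

if __name__ == "__main__":
    warm_cache()
```
-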
You forgot the part where they blame you for them not understanding.
Sorry IT Head, should've studied that documentation that someone worked so hard to make.
Related Rants
The IT head of my client's company : You need to explain to me what exactly you are doing in the backend and how the IoT devices are connected to the server. And the security protocol too.
Me : But it's already there in the design documents.
IT Head : I know, but I need more details as I need to give a presentation.
Me : (That's the point! You want me to be your teacher!) Okay. I will try.
IT Head : You have to.
Me : (Fuck you) Well, there are four separate servers - cache, db, socket and web. Each of the servers can be configured in a distributed way. You can put some load balancers and connect multiple servers of the same type to a particular load balancer. The database and cache servers need to be replicated. The socket and http servers will subscribe to the cache server's updates. The IoT devices will be connected to the socket server via SSL and will publish the updates to a particular topic. The socket server will update the cache server, and the http servers which are subscribed to that channel will receive the update notification. Then the http server will forward the data to the web portals via web socket. The websockets will also work over SSL to provide security. The cache server also updates the database after a fixed interval.
This is how it works.
IT Head : Can you please give the presentation?
Me : (Fuck you asshole! Now die thinking about this architecture) Nope. I am really busy.
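For the "cache server also updates the database after a fixed interval" part of the architecture above, a minimal write-back sketch, assuming redis-py and pymongo; the 60-second interval, the device:* key pattern and the devices collection are placeholders, not the real setup:

```python
# Periodic write-back: every FLUSH_INTERVAL seconds, persist cached device state to MongoDB.
# The interval, the "device:*" key pattern and the "devices" collection are illustrative only.
import time
import redis
from pymongo import MongoClient, UpdateOne

FLUSH_INTERVAL = 60

cache = redis.Redis(decode_responses=True)
devices = MongoClient()["mydb"]["devices"]

def flush_cache_to_db() -> None:
    ops = []
    for key in cache.scan_iter(match="device:*"):
        state = cache.hgetall(key)
        if state:
            device_id = key.split(":", 1)[1]
            ops.append(UpdateOne({"_id": device_id}, {"$set": state}, upsert=True))
    if ops:
        devices.bulk_write(ops)

if __name__ == "__main__":
    while True:
        flush_cache_to_db()
        time.sleep(FLUSH_INTERVAL)
```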
rant
aws
architecture
fuck client
client