54
dfox
8y

Hey everyone - apologies for the downtime earlier today. Our host is having a lot of issues and we're working to keep everything up through it.

On that note - there might be a little more downtime tonight as they're trying to fix something and we might need a few server restarts. I'll keep everyone updated, and thanks for bearing with us!

Comments
  • 6
    [SUGGESTION] Make a status.devrant.io or something (like Twitter's status.twitterstat.us) so we can know when the app is really down
  • 4
    @thassiov good idea
  • 0
    status.slack.com
  • 1
    What infrastructure do you actually use for devrant?
  • 3
    @sashikumar we have two web servers right now (one serving primary traffic and one as a redundant backup in a different data center). We have three database servers, but currently the master serves all the traffic and the two slaves are used for redundancy and backups. The web server switchover works really nicely (using AWS Route 53), but we've had issues with the database cutover that make it hard to fail over automatically.

    So we're trying to iron out those issues so we have no single points of failure.

    Interestingly though, the hardware issue our host had today was one that, due to its nature, wouldn't trigger any of our failovers automatically. We're going to address that with more monitoring.
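
    For anyone curious what the DNS-based failover looks like, here's a rough sketch of a primary/secondary record pair using the AWS SDK for PHP. This is just an illustration - the zone ID, health check ID, and IPs are placeholders, not our real config:

    <?php
    // Sketch of Route 53 DNS failover: a PRIMARY record backed by a health
    // check, and a SECONDARY record that Route 53 serves when the primary's
    // health check fails. Zone ID, health check ID, and IPs are placeholders.
    require 'vendor/autoload.php';

    use Aws\Route53\Route53Client;

    $r53 = new Route53Client(['version' => 'latest', 'region' => 'us-east-1']);

    $r53->changeResourceRecordSets([
        'HostedZoneId' => 'ZEXAMPLEZONEID',
        'ChangeBatch'  => ['Changes' => [
            ['Action' => 'UPSERT', 'ResourceRecordSet' => [
                'Name'            => 'devrant.io.',
                'Type'            => 'A',
                'SetIdentifier'   => 'web-primary',
                'Failover'        => 'PRIMARY',
                'TTL'             => 60,
                'HealthCheckId'   => 'example-health-check-id',
                'ResourceRecords' => [['Value' => '203.0.113.10']],
            ]],
            ['Action' => 'UPSERT', 'ResourceRecordSet' => [
                'Name'            => 'devrant.io.',
                'Type'            => 'A',
                'SetIdentifier'   => 'web-backup',
                'Failover'        => 'SECONDARY',
                'TTL'             => 60,
                'ResourceRecords' => [['Value' => '198.51.100.20']],
            ]],
        ]],
    ]);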
  • 1
    @dfox This is cool! Thanks! I have an app that serves around 20,000 visitors a day, and I'm using a 2 GB RAM instance. But I see regular CPU spikes and they cause downtime. Any help from you would be appreciated! :)
  • 0
    @sashikumar pretty cool, sounds like a popular app! As for the spikes, are you using Nginx or Apache? Do you know if the issues are caused by the web server or by other services that might be running on the instance?
  • 0
    What language and framework did you use to build devRant's private API and db? Play Scala/Java? Postgres? Docker? 😁😁
  • 1
    Hey @dfox, here's your AMA 😆
  • 0
    @vinerz good stuff, I agree. We actually serve very little static content (images get sent to S3/CloudFront), so I don't think there's much we can do there.

    One thing though: in terms of caching, we could add layers that serve stale content if the DBs go down, but there are definitely problems with that.

    So far, none of our downtime has been a result of traffic. It's all been reliability issues with our main host, plus issues with database clustering and the database not failing over properly.

    @nizamani we use PHP 7 and the Slim 3 framework, which I really like.
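
    A stripped-down illustration of that "serve stale content if the DBs go down" idea on top of Slim 3 - just a sketch under assumptions, not our actual code. The route, the APCu cache, and fetchRantFromDatabase() are placeholders:

    <?php
    // Sketch: Slim 3 route that serves from the database when it's up and
    // falls back to a stale cached copy (APCu here, just as an example) when
    // the database is unreachable.
    require 'vendor/autoload.php';

    // Hypothetical stand-in for the real query - devRant uses Neo4j per this thread.
    function fetchRantFromDatabase(int $id): array
    {
        return ['id' => $id, 'text' => 'example rant'];
    }

    $app = new \Slim\App();

    $app->get('/rants/{id}', function ($request, $response, $args) {
        $id  = (int) $args['id'];
        $key = "rant_$id";

        try {
            $rant = fetchRantFromDatabase($id);
            apcu_store($key, $rant, 300);          // refresh the cached copy
        } catch (\Throwable $e) {
            $rant = apcu_fetch($key);              // DB down: try the stale copy
            if ($rant === false) {
                return $response->withStatus(503); // nothing cached either
            }
        }

        return $response->withJson($rant);
    });

    $app->run();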
  • 1
    @dfox have you guys considered using AWS Elastic Beanstalk and RDS? Beanstalk is a nice deployment, autoscaling, and semi-managed platform that has great support for PHP. And RDS has a great, easy-to-use multi-read-slave setup with automatic failover. Maybe you guys are already using these... just thought I'd mention them.
  • 1
    @dfox Just saw another post saying you guys use Neo4j, so scratch the RDS comment lol.
  • 0
    @benc we're not hosted on AWS so I don't think that would work. Do you have to be deployed on EC2? We do use Route 53 though (and it's been great) for DNS-based failover of the web servers.

    I'm not that familiar with RDS, but it looks pretty cool. One issue for us, though, would probably be that we're pretty dependent on the graph structure of our DB.
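
    To give a rough idea of what I mean by depending on the graph structure, here's the kind of query that's natural in Neo4j but would be painful to map onto a relational setup. The labels, relationship types, host, and credentials are made up for illustration, not our real schema:

    <?php
    // Sketch: send a Cypher query to Neo4j's transactional HTTP endpoint.
    // The schema ((:User)-[:POSTED]->(:Rant)<-[:UPVOTED]-(:User)) is hypothetical.
    $cypher = '
        MATCH (u:User {username: $username})-[:POSTED]->(r:Rant)<-[:UPVOTED]-(fan:User)
        RETURN r.text AS rant, count(fan) AS upvotes
        ORDER BY upvotes DESC
        LIMIT 10';

    $payload = json_encode([
        'statements' => [[
            'statement'  => $cypher,
            'parameters' => ['username' => 'dfox'],
        ]],
    ]);

    $context = stream_context_create(['http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/json\r\n" .
                     'Authorization: Basic ' . base64_encode('neo4j:password') . "\r\n",
        'content' => $payload,
    ]]);

    // Neo4j 3.x transactional Cypher endpoint (placeholder host/credentials).
    $result = file_get_contents('http://localhost:7474/db/data/transaction/commit', false, $context);
    print_r(json_decode($result, true));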
  • 1
    @dfox oh, I thought you guys were, since you used S3/CloudFront and Route 53. Yeah, Beanstalk deploys to EC2 with an Elastic Load Balancer in front. It has auto scaling as well. It's a pretty nice tool; I used it extensively to deploy API servers in PHP 5.x using Slim 2.x before I switched to Node and AWS Lambda. Glad to hear you're on PHP 7 and Slim 3, good stuff. Keep up the great work, love the app!
  • 0
    @vinerz I've heard of Graphene and it looks pretty cool, but I haven't tried it. We manage Neo4j ourselves, but I'll check it out some more. Though at first glance it looks like both of those services are super, super expensive.

    @benc awesome :) it's nice to hear from someone else who is a Slim fan!
  • 0
    Nice, a dev ranting on devRant... 'cos the devRant platform was not developed by ranting.