42
linuxxx
6y

Alright, let's work on the security/privacy blog again.

Things I've got in the making right now: a dark theme by default, a font change, and an RSS feed!

Let me know what you'd like to see :)

I'll also reveal a new domain name soon!

Comments
  • 13
    RSS feed is in my opinion the most important thing :)
  • 3
    Is the source code in a public git?
  • 3
    @plusgut Any tips/ideas on that one? I've got some in my mind but I've never worked with those things :)
  • 9
    @xenira not yet for a reason, I really have to refactor first, it's quite a mess right now but it works 😅
  • 4
    @linuxxx rss or atom feeds are pretty basic, just look at any rss feed, for example: https://news.ycombinator.com/rss I think theres also a ton of libraries that take in an array and output it all perfectly ready for you
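As a point of reference for the comment above, a minimal RSS 2.0 document really is this small (the blog title, URLs, and date here are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>Security and privacy posts</description>
    <item>
      <title>First post</title>
      <link>https://example.com/first-post</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```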
  • 3
    @JoshBent Thanks! I was more thinking of things like how many items per 'page', etc. :)
  • 2
    @linuxxx How many items is a really good question, since the RSS standard didn't originally specify pagination (I don't know if it's officially in there now). And because of that, most RSS readers don't implement such a thing.
    For the beginning I would recommend putting all items in the RSS feed and maybe adding pagination later. See if there are too many performance implications.
  • 1
    @linuxxx Having looked around, it seems to be mostly the 9 or 10 most recent items
  • 2
    @JoshBent @plusgut My idea is to put not only my new blog posts in the RSS feed but also articles from other websites that interest me, so that users have a steadier stream of articles to read and I can expand the blog faster/better :)
  • 2
    @plusgut Wouldn't that rather turn into a content API then? I've never actually seen any pagination in RSS feeds so far, since most (all?) readers just scrape the subscribed feeds at some interval to get newer items, and then have their own pagination that fetches items from their own database.
  • 4
    @linuxxx What I've seen some sites with external content do is separate it and have two RSS feeds: one with your blog posts only, and one with your posts plus the external content. Might be a cool idea to consider, since some readers might not want external content mostly dominating your blog posts.
  • 4
    Hmm, good one, yeah. I just created one table in the current DB (people, you've really got to try harder than SQL injection attacks on the blog, IT DOESN'T EVEN USE SQL itself), but this is a good one, thanks!
  • 3
    @JoshBent That RSS readers mostly just scrape the feed is totally true for most readers. But not all of them.
    What a reader should do (as RFC 5005 describes https://tools.ietf.org/html/...)
    is: get the feed, and on the initial scrape, fetch all the *next* pages as well. On later visits, scrape until you hit the first item you already indexed; if it's not on the first page, continue through the *next* pages until you reach an item you already know.

    But as I said before, sadly that's not what's happening with most RSS readers.
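The scraping logic described above can be sketched in a few lines. This is only an illustration of the RFC 5005 idea, not the blog's actual code; the `fetch` callable and the two simulated pages are stand-ins for real HTTP requests:

```python
# Sketch of RFC 5005-style feed paging: follow Atom rel="next" links,
# stopping at the first entry we've already indexed.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def collect_new_entries(fetch, start_url, known_ids):
    """Walk rel="next" pages, collecting entry ids until a known one."""
    new_ids, url = [], start_url
    while url:
        root = ET.fromstring(fetch(url))
        for entry in root.findall(ATOM + "entry"):
            entry_id = entry.findtext(ATOM + "id")
            if entry_id in known_ids:
                return new_ids  # stop at the first already-indexed item
            new_ids.append(entry_id)
        # follow the pagination link, if any
        nxt = root.find(ATOM + "link[@rel='next']")
        url = nxt.get("href") if nxt is not None else None
    return new_ids

# Two simulated pages: page "p2" holds the older items.
pages = {
    "p1": """<feed xmlns="http://www.w3.org/2005/Atom">
               <entry><id>post-3</id></entry>
               <entry><id>post-2</id></entry>
               <link rel="next" href="p2"/>
             </feed>""",
    "p2": """<feed xmlns="http://www.w3.org/2005/Atom">
               <entry><id>post-1</id></entry>
             </feed>""",
}
print(collect_new_entries(pages.get, "p1", {"post-1"}))  # ['post-3', 'post-2']
```

A reader that already knows `post-1` fetches page one, collects the two newer posts, follows the next link, and stops as soon as it hits the known entry.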
  • 2
    @plusgut What would this mean for me? As in, what would I have to do?
  • 2
    @linuxxx For your first implementation I would completely ignore pagination. Work iteratively and show all the items you have in the rss.xml without any pagination.
    And if you notice performance issues, you can work on that later.
  • 2
    @plusgut Just wondering how I'd serve the rss.xml without the .php extension...
  • 5
    @linuxxx Either generate the .xml file whenever you publish new content, or just rewrite it in nginx, basically
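The nginx rewrite mentioned above could look roughly like this (the script name `rss.php` is from the thread; the exact location layout is an assumption about the blog's config):

```nginx
# Hypothetical snippet: serve /rss.xml from rss.php internally,
# so the .php extension never shows up in the feed URL.
location = /rss.xml {
    rewrite ^ /rss.php last;
}
```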
  • 3
    @JoshBent Currently looking into nginx rewrites indeed :)
  • 1
    @plusgut nice, thanks for that, didn't know rss readers should actually fetch the entire range
  • 2
    @linuxxx Though having it generated, or at least cached, can be a lot easier on your resources. Otherwise the PHP script regenerates that document on every request, which could also be a problem if you don't limit it while reading from MySQL (reading from Redis, not so much, though regenerating per request still isn't needed).
  • 2
    @JoshBent For now just reading from mysql :)
  • 2
    @linuxxx that could be a problem, since people could abuse that to bring down your mysql database iirc.
  • 3
    @JoshBent I've got an idea on that one, let me elaborate soon :)

    @dfox @trogus do you have any method to resize this? And if yes, what numbers would I have to change to adjust the width/height? (https://avatars.devrant.com/v-18_c-...)
  • 4
    @linuxxx what size are you looking for specifically? (Bigger/smaller). You can see different crops by changing the “c” parameter in the avatar url.
  • 2
    @linuxxx told you :D
  • 2
    @dfox @plusgut Yes, I figured that one out thanks to plusgut, disregard that asking comment :P
  • 3
    @linuxxx can I have a link to the feed as soon as you get it up?
  • 1
    What about content? Which topics do you want to cover?
  • 2
    @JonnyDoe It's an online/cyber security and online privacy blog so anything that fits in those categories.

    Can be software reviews, recent event reviewings and so on :)
  • 3
    @JoshBent Right now I'm thinking of regenerating the feed XML (with the 10 latest items) every time a new item is inserted. Generating probably takes less than a second and everything will be served statically :)
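That regenerate-on-publish idea can be sketched as follows. This is only an illustration, not the blog's code; the channel title, URLs, and item fields are invented, and a real version would pull the rows from MySQL instead:

```python
# Sketch: on publish, rebuild a static rss.xml from the 10 newest items,
# then let the web server serve the file directly.
import xml.etree.ElementTree as ET

def build_feed(items, limit=10):
    """Build an RSS 2.0 document from items (assumed newest-first)."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Blog"
    ET.SubElement(channel, "link").text = "https://example.com/"
    ET.SubElement(channel, "description").text = "Latest posts"
    for item in items[:limit]:
        node = ET.SubElement(channel, "item")
        ET.SubElement(node, "title").text = item["title"]
        ET.SubElement(node, "link").text = item["link"]
    return ET.tostring(rss, encoding="unicode")

# Fifteen fake posts, newest first; only the top 10 end up in the feed.
items = [{"title": f"Post {n}", "link": f"https://example.com/{n}"}
         for n in range(15, 0, -1)]
xml = build_feed(items)
```

On publish you would write `xml` to `rss.xml` once, so no PHP or database work happens per request.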
  • 1
    @JonnyDoe Good idea you think or nahhh?
  • 1
    @linuxxx so basically you're going with my suggestion above, nice 😊
  • 2
    @JoshBent Guess so! ;)
  • 2
    @linuxxx Do you recommend Contabo? Their plans look too good to be true
  • 3
    @Jifuna Been using them for a while now and it actually works great...
  • 3
    @Jifuna The blog and soon the privacy site will both run on one of their VPSes :) (the blog already does)
  • 3
    @linuxxx okay, thanks! I'll try them out soon. Looking forward to your next blogpost
  • 2
    @Jifuna Will be working on that tonight, it'll be a privacy one ;)