
So I managed to correctly project a bounding box in an image through three coordinate systems until I transformed it into an angular aperture in the base frame. So now I can detect people in the RGB camera and then block the lidar readings corresponding to the people when I run my localization algo :)
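In case anyone wants the gist of the math: a minimal sketch of the pixel-to-bearing step, assuming a plain pinhole model and a camera that differs from the base frame only by a yaw rotation. The intrinsics (`FX`, `CX`) and `CAM_YAW` below are made-up placeholder values, not the actual calibration.

```python
import math

# Hypothetical intrinsics: FX = focal length in px, CX = principal point x.
FX, CX = 600.0, 320.0   # assumed pinhole parameters, not real calibration
CAM_YAW = 0.0           # assumed camera yaw relative to the base frame (rad)

def bbox_to_aperture(u_left, u_right, fx=FX, cx=CX, cam_yaw=CAM_YAW):
    """Map a bounding box's left/right pixel columns to a (min, max)
    bearing interval in the base frame."""
    # Bearing of a pixel column: positive to the left of the optical axis.
    def bearing(u):
        return math.atan2(cx - u, fx) + cam_yaw
    a, b = bearing(u_left), bearing(u_right)
    return (min(a, b), max(a, b))
```

So a box spanning columns 280..360 on this hypothetical camera maps to a small symmetric aperture around the optical axis; lidar beams whose bearing falls inside that interval get masked.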

Comments
  • 1
    Can you kindly point me to some reading material about camera reconstruction/motion reconstruction? :^)
  • 4
    @monzrmango So the bible is Zisserman's book "Multiple view geometry in computer vision". But Prof. Daniel Cremers has very nice lectures on YouTube covering classic reconstruction methods.
  • 0
    So what does the graph represent ?
  • 0
@killames as far as I understand, the green area is where they stand and where the lidar is then blocked. You can see the guy's legs a bit closer to the origin than Nicky's.

    Is that rgb camera view just a section of a 360° camera or how does that work?
  • 1
that sounds very complicated and math heavy, what does that have to do with your and your colleagues' knees though
  • 1
    @LotsOfCaffeine His palms are sweaty, knees weak, arms are heavy
    There's vomit on his sweater already, mom's spaghetti
    He's nervous, but on the surface he looks calm and ready
  • 1
    @PonySlaystation music the moment you never ever let it go you only get one shot lol
  • 1
    @PonySlaystation Awwww I love random strangers on the internet too hehe
  • 1
    @PonySlaystation .... I can see where a misinterpretation would be made on here and well everywhere but unless you’re pale dark haired 20 Shave and are accompanied by your girlfriend who is buxom and around the same age plus maybe a decade don’t worry lol

    Unless I’m really really desperate lol
  • 0
    @killames "Deep down, aren't we all kinda desperate?" 🤣
  • 1
    @PonySlaystation that or opportunistic within lower standards than initially expressed lol
  • 1
@killames @electrineer The green area is the camera's HFoV. For now it's only 90 degrees, but in the final setup we will have 360° coverage. The red dots are lidar scans that will be discarded in the localization algorithm because they represent dynamic objects (people in this case), so there is no point in matching them against the map, which holds only static objects. Inside the green area there are two sets of rays, each corresponding to the bounding box of a detected person :)
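The discarding step itself is simple once the apertures exist. A rough sketch (my own illustration, not their actual code), assuming scan bearings are already expressed in the base frame and no aperture wraps past ±π:

```python
def filter_scan(angles, ranges, apertures):
    """Drop lidar returns whose bearing falls inside any person aperture.

    angles, ranges -- per-beam bearing (rad, base frame) and distance (m)
    apertures      -- list of (min_angle, max_angle) tuples, one per person
    """
    keep_angles, keep_ranges = [], []
    for a, r in zip(angles, ranges):
        if any(lo <= a <= hi for lo, hi in apertures):
            continue  # dynamic object: skip it before scan matching
        keep_angles.append(a)
        keep_ranges.append(r)
    return keep_angles, keep_ranges
```

The surviving beams are the only ones handed to the scan matcher, so people walking through the scene no longer drag the pose estimate around.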
  • 2
@LotsOfCaffeine The camera is mounted low because our platform is small and shaky, so placing it on a higher pole would mean it sways from side to side like a drunk pirate when the robot moves. On the target platform we will have it placed higher, but for now we try to do people detection based only on the lower part of the body