Search - "robotics"
Training the beast!
Hopefully I will submit my paper soon, and then I can share a video of the beast in action :)
I am working on my passion project, on my own vacation days because my advisor did not approve it and I can't take no for an answer.
But I had 3 amazing days working with my friend and research partner, full of stupid bugs, moody hardware and a lot of nutella-covered food.
I think I am going to document some of the progress on Twitter, because it supports uploading videos. If anyone is interested in failing robots, I can share the link/handle :)
We got the report made by the EU committee that is assigned to evaluate our project (robotics in the service of healthcare).
I was full on trashing the reviewers for writing some seriously dumb shit, and low-key dissing my professor. Until I got to the part where they addressed the work package I was responsible for. They referred to my work as impressive and innovative, and I was like, well, maybe they're not that bad 😂
So my localization algorithm actually runs onboard my YouBot :)
My paper was basically torn apart by my professor, so I had to write some new classes and redo the whole experimental section. And all the other sections too. I resubmitted it to him after revisions, and the second iteration was way better - I'm really close to final paper level :)
I told my professor and postdoc that I would appreciate more support and positive feedback, because so far our communication has been only very dry criticism. For me it's really devastating, because feeling like I constantly disappoint people just kills me on the inside.
It seems like they took it to heart; they have been nicer to me in the last few days :)
So the new robot (Dingo) arrived yesterday. Today I did the unboxing, and damn was it disappointing. I realized the university purchased it not from the company I recommended, and they messed up the delivery.
The robot controller was missing, and the charging cable did not match the charging station input. Like dude, you had one job!
Since the lady in administration decided to disregard my recommendation and order from a random shop, she might as well assemble the robot herself....
My third paper got accepted, doing localization with this cute baby in the picture. Had a lot of fun collaborating with a good friend of mine from ETH.
My advisor declines every request I have, and then ignores me most of the time. No wonder the motivation in the lab is lower than the Dead Sea.
I have no words to describe how much I hate every second of my existence, but simultaneously I refuse to change my toxic circumstances, so I have only myself to blame. Cheers.
I just applied for one of those big big biiiiiiiig companies in robotics...
Something in my mind is telling me that I am actually losing it. Like, my mind. I must be losing my mind. 🤔
Oh well! ¯\_(ツ)_/¯
The newest addition to our lab - Pikachu!
We managed to overcome the weirdest pinout configuration ever on the MCU and power up the Nvidia Jetson.
Next week I'm going to do a clean install of the Jetson because there is some funky garbage there, and then I'll try to drive the little beast :)
I got a "Revise and Resubmit" response from the journal. Reviewer #1 wanted me to implement some baselines to compare against my method, and he cited 6 papers. 5 of them did not have open-source code, and it's not like you can re-implement, train and fine-tune someone else's approach in 2 weeks. The last one had code: for Ubuntu 14 (may its soul rest in peace), an OpenCV version from the time America and Europe were connected, and a CUDA toolkit that was carved in stone in a cave.
I was bawling my eyes out, thinking about how many days it would take trying to Docker it to work. But then I realized the approach he cited was for RGB-D data, while I only use an RGB camera. That's like letting a sniper with an M82 compete in archery....
My professor asked for some images of cool stuff I worked on for a presentation he is giving. So here is me moving fast enough to cause motion blur :) The code uses the camera to detect people, then projects the bounding box down into the lidar frame and masks all the lidar points within that cone.
Anyway, if someone is familiar with super fast agglomerative clustering code in C++ (or even Python, if it's efficient), please share it with me!
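For anyone landing here with the same question: a minimal sketch of agglomerative clustering in Python, using SciPy's single-linkage hierarchy (often fast enough for a few thousand lidar points). The point cloud and the 0.5 m gap threshold below are made-up stand-ins, not from the original setup.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_points(points, max_gap=0.5):
    """Agglomerative (single-linkage) clustering: points chained by
    nearest-neighbour gaps below max_gap end up in the same cluster."""
    Z = linkage(points, method="single")            # build the merge tree
    return fcluster(Z, t=max_gap, criterion="distance")  # cut it at max_gap

# Two well-separated synthetic blobs, standing in for real lidar returns
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal((0, 0, 0), 0.05, (50, 3)),
                 rng.normal((5, 0, 0), 0.05, (50, 3))])
labels = cluster_points(pts)
print(len(set(labels)))  # -> 2
```

For really large clouds, a C++ route would be something like PCL's Euclidean cluster extraction, which does the same gap-based grouping on a kd-tree.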
I interviewed for a YouTube channel that hosts young researchers from different fields, to discuss robotics (in very simplified and people-friendly terms).
I also managed to assemble my TD-7 drum kit remarkably fast, considering I don't have previous IKEA experience. This is why you should give opportunities to enthusiastic juniors, they might surpass your expectations :)
No. No. And Absolutely No.
The Three Laws of Robotics MUST not be broken.
https://cnn.com/2022/11/...
That surrealist moment when Firefox told me it stopped the international journal of robotics research from tracking my social media...
Dafuq?
Saturday evening open debate thread to discuss AI.
What would you say the qualitative difference is between
1. An ML model of a full simulation of a human mind taken as a snapshot in time (supposing we could sufficiently simulate a human brain)
2. A human mind where each component (neurons, glial cells, dendrites, etc.) is replaced with artificial components that exactly functionally match their organic counterparts.
Number 1 was never strictly human.
Number 2 eventually stops being human physically.
Is number 1 a copy? Suppose the creation of number 1 required the destruction of the original (perhaps to slice up and scan in the data for simulation)? Is this functionally equivalent to number 2?
Maybe number 2 dies so slowly, with the replacement of each individual cell, that the subnetworks designed to notice such a change, or feel anxiety over death, simply aren't activated.
In the same fashion, is a container designed to hold a specific object still the same container if, bit by bit, the container (the brain) is replaced while the contents (the mind) remain essentially unchanged?
This topic came up while debating Google's attempt to covertly advertise its new AI. Oops, I mean, the engineer who 'discovered Google's AI may be sentient.' Hype!
Its sentience, however limited by its knowledge of the world through training data, may sit somewhere at the intersection of its latent space (its model data) and any particular instantiation of the model. Meaning, hypothetically, if there's even a bit of truth to this, the model "dies" after every prompt, retaining no state in between.
Honestly, I can't remember. A combo of wanting to do AI and other smart stuff got me here. But like, not even sure I'm there yet.
Always had a knack for robotics tho. That's the only thing that's natural to me.
- Remake all my hacky products and finally make those adjustments and improvements I always forget about. (A shitton of maintenance that I always YOLO my way through)
- Potentially finally give digital drawing and design a go as a second career (if money permits, also)
- Move to middle of Asia, dead center of Kazakhstan or wherever there are gypsy tribes, learn their language and teach their kids about computers and robots and make a lot of products that'd make a gypsy's life easier. Or rather, create a modern gypsy life that does not override their traditional ways, rather integrates with it. (This is one of my dreams, which I know will never come true. Gypsies and nomads do settle more and more each year and their culture is basically going extinct. Plus, govts around the world dislike them greatly)
- Do a lot more research projects in robotics. Literally make everyday robotic items and then sell them. (with a sprinkle of AI/ML, that is)
All the above would also need lots of money and effort tho.
Anybody has an opinion on CMU for a machine learning or robotics PhD? You think they'll let me in? (I've heard horror stories from their selection process tbh)
Also, any good Canadian unis and degrees for AI/robotics combo PhDs?
ROS Melodic in a strictly Python 2.7 environment mixes horribly with a PyTorch-based RL module... Time to work around it with terminal calls from the latter.
*sigh*
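The "terminal calls" workaround usually looks like this: the Python 2.7 ROS node shells out to a separate Python 3 process and exchanges data as JSON over stdin/stdout. A minimal sketch below (written in Python 3 for brevity; on 2.7 the parent would use `subprocess.check_output` instead). The child script is a made-up stand-in for the actual PyTorch policy, not the real module.

```python
import json
import subprocess
import sys

# Stand-in for the Python 3 / PyTorch side: read an observation as JSON
# from stdin, write an "action" as JSON to stdout. A real script would
# load the trained policy and run inference here.
CHILD = """
import json, sys
obs = json.load(sys.stdin)
action = [2.0 * x for x in obs]   # placeholder for policy(obs)
json.dump(action, sys.stdout)
"""

def query_policy(observation):
    """Launch the child interpreter, pipe one observation in, get the action back."""
    proc = subprocess.run([sys.executable, "-c", CHILD],
                          input=json.dumps(observation).encode(),
                          stdout=subprocess.PIPE, check=True)
    return json.loads(proc.stdout)

print(query_policy([0.5, 1.0]))  # -> [1.0, 2.0]
```

Spawning a process per call is slow; if the per-message latency hurts, keeping one long-lived child via `subprocess.Popen` and streaming newline-delimited JSON over its pipes amortizes the startup cost.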