Show HN: I made a new sensor out of 3D printer filament for my PhD
886 points | 00702 | 1 year ago | paulbupejr.com | reply
I've been on HN for a while now and I've seen my fair share of posts about the woes of pursuing a PhD. Now that I'm done with mine, I want to share some anecdotal evidence that doing a PhD can actually be enjoyable (not necessarily easy) and also be doable in 3 years.
When I started I knew I didn't want to work on something that would never leave the lab or languish in a dissertation PDF no one will ever read. Thanks to an awesome advisor I think I managed to thread the needle between simplicity and functionality.
Looking back, the ideas and methods behind it are pretty straightforward, but getting there took some doing. It’s funny how things seem obvious once you've figured them out!
Oh, I love creating GUIs for sensor data and visualizations as you'll see -- it's such a game changer! pyqtgraph is my go-to at the moment - such a great library.
[+] [-] raphman|1 year ago|reply
[+] [-] smoyer|1 year ago|reply
[+] [-] 00702|1 year ago|reply
[+] [-] OisinMoran|1 year ago|reply
The research I did before my current job touched ever so slightly on this too, so even cooler to see it on the front page. What we were doing was using complex valued neural nets to learn the transmission matrix of an optical fibre. It was previously done in the optics community by propagating Maxwell's equations, but we were able to beat the state of the art by a few orders of magnitude with a very simple architecture (the actual physics just boils down to a single complex matrix multiplication!). The connection to your work here is that if the fibre is bent you have to relearn a new matrix. It could even be possible to learn some parameterized characterisation of the fibre, so you could say do some input/output measurements and use that to model a spline of the fibre. We did not get that far though!
Here are the papers if you're interested:
CS-focussed one: https://papers.nips.cc/paper_files/paper/2018/hash/148510031...
Physics-focussed one: https://www.nature.com/articles/s41467-019-10057-8
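The "single complex matrix multiplication" framing above can be sketched with plain least squares: given input/output field pairs with y = Tx, the transmission matrix T is recoverable directly once you have enough probe inputs. Everything here (mode count, the random "fibre" matrix, noiseless measurements) is made up for illustration, not from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # number of modes in the toy fibre

# Hypothetical ground-truth transmission matrix of the fibre.
T_true = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Simulated input/output field measurements: Y = T X.
X = rng.normal(size=(n, 100)) + 1j * rng.normal(size=(n, 100))
Y = T_true @ X

# Recover T by complex least squares: Y.T = X.T @ T.T, solve for T.T.
T_est_T, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
T_est = T_est_T.T

assert np.allclose(T_est, T_true)      # exact recovery in the noiseless case
```

With noisy measurements (or a learned, parameterized model as the comment suggests) you would trade the exact solve for an optimization, but the core object stays a single complex matrix. Bending the fibre changes T, so the fit has to be redone, as noted above.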
[+] [-] spiderxxxx|1 year ago|reply
[+] [-] jjk166|1 year ago|reply
[+] [-] 00702|1 year ago|reply
[+] [-] klysm|1 year ago|reply
And there you have it! The difference between a miserable experience and a good one
[+] [-] 00702|1 year ago|reply
[+] [-] jallmann|1 year ago|reply
[+] [-] 00702|1 year ago|reply
[+] [-] ericdfoley|1 year ago|reply
That is also used for various industrial applications, e.g. for strain sensing by Luna Innovations. I know that Schlumberger has various patents on fiber-optic sensing relating to towed streamers (e.g. for marine seismic acquisition.) But I haven't seen it used for soft robotics before.
[+] [-] 00702|1 year ago|reply
[+] [-] crote|1 year ago|reply
Your sensor data seems to have quite large "dead zones" - those should be trivially fixable by reducing the inter-sensor distance, right?
Would it be useful to sense the direction of the bend? I reckon this might be possible by dividing the tube like a Mercedes logo, and having three sets of the sensor in one outer tube.
Is there a way to sense multiple bends? With the current setup that'd result in invalid readings as you're essentially OR-ing the value. Are there any good solutions for this?
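The three-strand "Mercedes logo" idea above works like a strain rosette: if each strand's reading varies (roughly) as the cosine of the angle between the bend plane and that strand, the bend direction falls out of a weighted sum. This is a toy sketch under that assumed linear response, not the OP's actual sensor model:

```python
import math

# Angular positions of three strands spaced 120 degrees apart.
ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]

def bend_direction(readings):
    """Estimate bend direction from three strand readings.

    Assumes each strand responds as cos(theta - strand_angle), i.e. a
    linear response to the curvature component in its own plane.
    """
    x = sum(r * math.cos(a) for r, a in zip(readings, ANGLES))
    y = sum(r * math.sin(a) for r, a in zip(readings, ANGLES))
    return math.atan2(y, x)

# Simulate a bend at 60 degrees and check that we recover it.
theta = math.pi / 3
readings = [math.cos(theta - a) for a in ANGLES]
print(round(math.degrees(bend_direction(readings))))  # 60
```

The magnitude (sqrt(x^2 + y^2)) would give bend strength under the same assumption.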
[+] [-] 00702|1 year ago|reply
[+] [-] beachy|1 year ago|reply
The idea being to create a golf launch monitor that doesn't require hitting a ball, so you can play sim golf inside. Think playing alongside the Masters as you watch on TV in the lounge - without smashing a golf ball through your TV.
I am wondering if this could be suitable (or a number of them ganged together).
[+] [-] cglace|1 year ago|reply
[+] [-] speps|1 year ago|reply
As mentioned, a high FPS camera along with the Kinect tech to extract a skeleton would work so much better. You could make that in your garage using a PlayStation Eye and existing open source tech.
[+] [-] risenshinetech|1 year ago|reply
[+] [-] schrectacular|1 year ago|reply
[+] [-] downrightmike|1 year ago|reply
They put light down a tube and then measured the light to trigger a key press. That's why bending your fingers/hand did anything. They revised the mechanism in the later generations.
[+] [-] Doxin|1 year ago|reply
[+] [-] robryk|1 year ago|reply
At the beginning you mention a ToF sensor, which made me think that you're looking at reflections from the bends and measuring distance to them, but this seems not to be the case. ISTM that if you bend the sensor in two places, you'll simply get the sum of the log-attenuations from both. If we assume that the "strength" of the bend continuously changes attenuation, ISTM that you need as many strands as there are gap locations to be able to disambiguate between any two sets of bends.
Am I misreading something or is this intended to operate in cases where we know only one bend is present?
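The "as many strands as gap locations" argument above is just linear algebra: if log-attenuations add, each strand gives one equation, and a full-rank per-strand sensitivity matrix lets you solve for the individual bends. The sensitivity matrix below is entirely hypothetical, only the structure of the argument matters:

```python
import numpy as np

# Hypothetical sensitivities: A[i, j] is how strongly a bend at gap j
# attenuates strand i (in dB per unit bend). Because A is invertible,
# three strands can disambiguate bends at three gap locations.
A = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.3, 1.0]])

bends_true = np.array([0.5, 0.0, 1.2])    # bend "strength" at each gap
measured = A @ bends_true                 # summed log-attenuation per strand

bends_est = np.linalg.solve(A, measured)  # recover the individual bends
assert np.allclose(bends_est, bends_true)
```

With fewer strands than gaps the system is underdetermined, which is exactly the ambiguity the comment describes: distinct bend combinations can produce the same summed reading.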
[+] [-] mariusor|1 year ago|reply
[+] [-] riedel|1 year ago|reply
[+] [-] 00702|1 year ago|reply
[+] [-] Prcmaker|1 year ago|reply
[+] [-] bythreads|1 year ago|reply
[+] [-] 00702|1 year ago|reply
[+] [-] karambahh|1 year ago|reply
My reasoning is that you'd increase the resolution without adding too much technical complexity.
My maths is too rusty to evaluate how it would mess with the Gray code though.
Very nice idea.
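For reference, the useful property of a binary-reflected Gray code is that adjacent values differ in exactly one bit, which is what keeps a position reading from glitching across multiple bits at a boundary. A minimal sketch of the standard conversion and a check of that property (the encoding details of the OP's sensor are not assumed here):

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent values differ in one bit."""
    return n ^ (n >> 1)

# Verify the single-bit-change property over a small range; adding
# resolution (more code positions) means re-checking this still holds.
for i in range(15):
    diff = to_gray(i) ^ to_gray(i + 1)
    assert diff & (diff - 1) == 0        # exactly one bit flips
print([to_gray(i) for i in range(8)])    # [0, 1, 3, 2, 6, 7, 5, 4]
```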
[+] [-] 00702|1 year ago|reply
This can certainly be miniaturized with the right manufacturing techniques but I left that for the future.
[+] [-] wholinator2|1 year ago|reply
[+] [-] 00702|1 year ago|reply
[+] [-] skoocda|1 year ago|reply
Alternatively, could you use a short segment of colored cladding that allows certain wavelengths to leak out more than others? I think that would allow you to encode each bend point as a different color, which might require a different (more expensive) rx sensor, but could be useful for certain applications.
[+] [-] 00702|1 year ago|reply
There is already existing work that uses colored segments for something similar, but those techniques are hard to do outside a well-equipped lab.
[+] [-] knodi123|1 year ago|reply
[+] [-] teucris|1 year ago|reply
[+] [-] 00702|1 year ago|reply
[+] [-] moralestapia|1 year ago|reply
Definitely try to explore the commercial side of your invention.
It wouldn't hurt to talk to an IP lawyer, if you're still in Uni they usually have people there doing this and you can just go talk to them, free of charge (for you!).
I'm generally against the idea of patents, mainly because of people who have learned to game and exploit the system (patent trolls etc...). But your project is a real thing with real applications; you definitely deserve a share of whatever commercial benefit this could bring to the world. :D
[+] [-] 00702|1 year ago|reply
[+] [-] kobalsky|1 year ago|reply
[1]: https://en.wikipedia.org/wiki/Optical_time-domain_reflectome...
[+] [-] porphyra|1 year ago|reply
[+] [-] Netcob|1 year ago|reply
[+] [-] 00702|1 year ago|reply
[+] [-] Cerium|1 year ago|reply
Relevant patent: https://patents.google.com/patent/US20240044638A1/
The first time I saw one of these in person I was in awe. You could take a normal looking cable (think bicycle cable sleeve) and bend it and see in real time the same shape on the display.
[+] [-] nuancebydefault|1 year ago|reply
[+] [-] unknown|1 year ago|reply
[deleted]
[+] [-] Prcmaker|1 year ago|reply
Well done completing the PhD!
[+] [-] 00702|1 year ago|reply