
Welcome to the software subteam!
All of our code can be found on our GitHub.
The main components of our software work were as follows:

Control Code

To fulfill our task of gripping an object, we had to write control code. We used a microcontroller (the Arduino Uno R3) to drive our motors and other components and to read input from the various sensors we integrated into the system.

Across all of our sprints, this control code had basically the same structure. A servo is connected to each finger, and the servos start upright and sweep toward their maximum gripping position. To decide when to start gripping, we use an IR sensor to detect whether an object has been placed in the palm. The code sets an IR threshold (determined by placing an object at the closest distance where we want sensing to begin), and if the reading crosses that threshold, we assume an object is present and the hand starts gripping.

However, when does it stop gripping? Touch sensors in the fingers serve this very purpose. We set a threshold for the touch sensors as well (chosen by finding the reading that corresponded to the maximum force we wanted to exert - the force that, to us, signified a solid grip). When a finger reaches this value, it stops closing and holds its position.

Actuating the servos was not as simple as actuating them all at once - we couldn’t do that because then the fingers would collide, preventing complete finger closure. As a result, we had to stagger the finger movement for a working gripping motion.

Lastly, due to mechanical limitations of the 3D-printed fingers, each finger tolerated a slightly different range of motor movement. We therefore tuned the PWM values sent to each servo individually to make the overall system more reliable.
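Putting all of that together, the control loop looks roughly like the sketch below. This is not our exact code - the pin numbers, thresholds, finger count, and per-finger angle limits are placeholders - but it shows the structure: wait for the IR reading to cross the threshold, close the fingers in a staggered fashion, and hold each finger once its touch sensor crosses its threshold.

```cpp
// Minimal sketch of the control loop described above (Arduino C++).
// Pin numbers, thresholds, finger count, and angle limits are placeholders.
#include <Servo.h>

const int NUM_FINGERS = 3;
const int SERVO_PINS[NUM_FINGERS] = {9, 10, 11};
const int TOUCH_PINS[NUM_FINGERS] = {A1, A2, A3};
const int IR_PIN = A0;

const int IR_THRESHOLD = 500;     // reading when an object sits in the palm
const int TOUCH_THRESHOLD = 300;  // reading at the max force we want to exert
const int OPEN_ANGLE = 0;
// Each 3D-printed finger tolerates a slightly different maximum angle.
const int MAX_ANGLE[NUM_FINGERS] = {150, 140, 145};

Servo fingers[NUM_FINGERS];
int angles[NUM_FINGERS];

void setup() {
  for (int i = 0; i < NUM_FINGERS; i++) {
    fingers[i].attach(SERVO_PINS[i]);
    fingers[i].write(OPEN_ANGLE);   // start with the fingers upright
    angles[i] = OPEN_ANGLE;
  }
}

void loop() {
  // Wait until the IR sensor says an object has been placed in the palm.
  if (analogRead(IR_PIN) < IR_THRESHOLD) {
    return;
  }

  // Interleave small movements with a short delay so the fingers close in a
  // staggered fashion rather than all at once (avoiding collisions), and stop
  // each finger once its touch sensor crosses the threshold.
  for (int i = 0; i < NUM_FINGERS; i++) {
    if (analogRead(TOUCH_PINS[i]) >= TOUCH_THRESHOLD) {
      continue;                     // this finger is already gripping: hold position
    }
    if (angles[i] < MAX_ANGLE[i]) {
      angles[i]++;
      fingers[i].write(angles[i]);
    }
    delay(15);                      // small offset between finger movements
  }
}
```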

Mechanical Simulations

To better understand and visualize the movement of our mechanical system, we created physics simulations for a select number of the “pulley finger” designs we had ideated. We built these simulations in MATLAB and Mathematica, using physics derived from the various research papers we had read.

Our reason for focusing on the simulations was that we realized we had, at best, a tentative grasp of the physics of our system. We could generally explain what was happening when each finger gripped, which was okay, but iterating on the finger in a way that actually served the outcome of the project required a deeper understanding of the math behind it.

During our research-only sprint, therefore, we dedicated time to creating simulations. This ultimately helped us understand where to place the holes for the strings and how tensioning them causes movement in the finger. Since our finger is bioinspired by a human finger, the design relied entirely on how our “tendons” (strings) act on the “digits” (each of the joint segments).
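For a rough sense of the kind of relation these simulations rest on (this is a generic tendon-driven joint model, not necessarily the exact equations from the papers we followed): the torque a string applies at a joint is its tension times the moment arm set by where the hole is placed, and each digit rotates according to that torque balanced against the joint's passive stiffness and damping.

```latex
% Generic tendon-driven joint model (illustrative only). T is the string tension,
% r_i the moment arm of the string at joint i; theta_i is the rotation of digit i,
% with passive stiffness k_i and damping b_i.
\tau_i = T\, r_i,
\qquad
I_i\,\ddot{\theta}_i = \tau_i - k_i\,(\theta_i - \theta_{i,0}) - b_i\,\dot{\theta}_i
```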

In the interest of honesty, these simulations were ultimately used very minimally, due to the lack of time the mechanical subteam had before iterating on the finger design. However, they still give us a more comprehensive view into the mechanics of the system.

Website

At some point, it made sense for us to start treating the website as a more concrete software deliverable. So we made sure to focus on its completion - especially after we were explicitly warned that while working on the actual project is cool, the website is a large deliverable and deserves some dedicated time.

We started off with a simple website template and then edited it to fit our criteria and our vision. Although we originally thought this would be an easy task, it turns out that website creation is a lot harder than it seems, so it actually took a fair amount of time. However, the work paid off, and we got the website looking the way we wanted.

Part of our website design included blog posts to better document our process. We felt this was the best way to walk through our progress across all the sprints - so we wrote some blog posts as well (this wasn’t a software-only task, however).

The results of this Software deliverable can be seen on the very same website you are looking at right now!

Computer Vision

Computer vision was very much an iterative process, but unfortunately a lot of the iteration was ‘horizontal’ rather than ‘vertical’. That is, we made many changes to the scripts we used, but mostly by writing another script that served the same purpose, only better, and we kept re-coding until we found something that worked - whereas a more efficient ‘vertical’ process would have been to continuously build up the same script until it worked.

Our first script essentially stopped and started the system based on a ‘flag’: if a green flag appeared, the system would start; if a red flag appeared, it would stop. This was an attempt at building up to object tracking.

The good thing about starting with a ‘start-stop’ script in OpenCV is that it let us exercise a lot of important CV functions and techniques and really familiarize ourselves with OpenCV, especially its C++ syntax. It also helped accustom the team to coding in C++, which all of us had done before, but which still wasn’t completely intuitive for us.

The ‘start-stop’ actuation had three main function blocks: detecting red, detecting green, and a main class to pull everything together. In the color-detection blocks, we first converted the image to HSV with the cvtColor function, just to standardize the color detection. Then we used the inRange function, defining the HSV ranges for the color we wanted to detect. This was fairly straightforward for green; detecting red was a little less straightforward, because its hue range wraps around the ends of the HSV hue scale.

To get a concrete boolean flag out of the image signifying whether it detected ‘enough’ red or ‘enough’ green, we took the binary image that inRange returned and counted up the number of white pixels (white marked the pixels within the color range we specified; black marked everything else). This proved more difficult than expected, mainly because of trouble parsing through the pixels (we had to cast the pixel values to integers to get it to work, which we initially did not). We were also using the wrong range at one point - we were using HSV ranges to detect the white pixels, when the image we were checking was actually in the BGR color scheme.

From here, it was as easy as calling these functions on each frame of the video and sending out the flag based on the camera input.
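To give a concrete flavor, here is a minimal sketch of this kind of color-flag pipeline (not our exact code; the HSV bounds and the pixel-count threshold are placeholders):

```cpp
// Minimal sketch of the start/stop color-flag detection described above.
// HSV bounds and the pixel-count threshold are placeholders, not our exact values.
#include <opencv2/opencv.hpp>

// Returns true if "enough" pixels in the frame fall inside the given HSV range.
bool detectColor(const cv::Mat& frameBGR, const cv::Scalar& low, const cv::Scalar& high,
                 int minPixels) {
    cv::Mat hsv, mask;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);  // standardize on HSV
    cv::inRange(hsv, low, high, mask);               // white = in range, black = out
    return cv::countNonZero(mask) > minPixels;       // count the white pixels
}

bool detectGreen(const cv::Mat& frame) {
    return detectColor(frame, cv::Scalar(40, 70, 70), cv::Scalar(80, 255, 255), 2000);
}

bool detectRed(const cv::Mat& frame) {
    // Red hue wraps around the ends of the HSV hue scale, so check two ranges.
    return detectColor(frame, cv::Scalar(0, 70, 70), cv::Scalar(10, 255, 255), 2000) ||
           detectColor(frame, cv::Scalar(170, 70, 70), cv::Scalar(180, 255, 255), 2000);
}

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame;
    bool running = false;
    while (cap.read(frame)) {
        if (detectGreen(frame)) running = true;   // green flag: start
        if (detectRed(frame))   running = false;  // red flag: stop
        // ...send the `running` flag out to the rest of the system here...
        if (cv::waitKey(30) == 27) break;         // ESC to quit
    }
    return 0;
}
```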

Our next script was object tracking. The final object-tracking script used the pre-made object-tracking library within OpenCV, and it worked better than our previous attempts. The actual code was very easy - using the OpenCV library and making a pre-made box follow the tracked object - but one of the reasons this library was preferable was that it could use several different algorithms, each with its own strengths and pitfalls. For instance, some were very good at detecting failures but had a noticeable delay when actually tracking the object. Others tracked an object very well - even when it was moved quickly - but wouldn’t recognize when it was moved out of frame. We also wrote logic to track an object and detect when it went off frame (so that the arm would move up or down to find it again, and only grip when the object was within range); this wasn’t ultimately put to use due to other difficulties, but the capability is there.
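For reference, here is a minimal sketch of how OpenCV’s built-in tracker API can be used this way (assuming OpenCV 4.5+ with the tracking module available; the choice of tracker and the camera index are placeholders):

```cpp
// Minimal sketch of OpenCV's built-in tracker API (assumes OpenCV 4.5+ with the
// tracking module). Tracker choice and camera index are placeholders.
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame;
    cap.read(frame);

    // Draw a box around the object to track, then initialize the tracker on it.
    cv::Rect box = cv::selectROI("select object", frame);
    cv::Ptr<cv::Tracker> tracker = cv::TrackerCSRT::create();  // other options: KCF, MIL, ...
    tracker->init(frame, box);

    while (cap.read(frame)) {
        bool found = tracker->update(frame, box);  // false when the tracker loses the object
        if (found) {
            cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
            // ...object in frame: decide whether it is within gripping range...
        } else {
            // ...object lost: the arm could move up or down to search for it...
        }
        cv::imshow("tracking", frame);
        if (cv::waitKey(30) == 27) break;  // ESC to quit
    }
    return 0;
}
```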

Interfacing with Serial

One of the main pitfalls for our team was attempting to interface with the Arduino over serial in order to drive actual actuation from the object tracking. Although the software subteam was familiar with PySerial and with both sending and receiving values through Python, interfacing with the Arduino from C++ was another beast entirely.

The first option we pursued was to interface with the Arduino without any particular libraries. Using built-in C++ facilities, we were able to successfully and reliably talk from a computer or Raspberry Pi to the Arduino. We hadn’t looked into talking back to the computer, but we didn’t really need that for this project. The problem with this approach was that it was not live: the program worked by writing information to a file and then sending that file over the serial connection to the Arduino. Instead of sending tiny packets of bytes at a time, there was a lot of overhead in creating a file and breaking it down into serial communication. We determined this was too slow for our purposes.
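For context, the kind of direct byte-level serial write we were after looks roughly like this on Linux, using only standard POSIX calls (the device path, baud rate, and message are placeholders):

```cpp
// Sketch of writing bytes straight to an Arduino's serial port on Linux using
// POSIX calls (no extra libraries). Device path, baud rate, and message are placeholders.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstring>

int main() {
    int fd = open("/dev/ttyACM0", O_WRONLY | O_NOCTTY);
    if (fd < 0) return 1;

    termios tty{};
    tcgetattr(fd, &tty);
    cfsetospeed(&tty, B9600);    // match Serial.begin(9600) on the Arduino
    cfmakeraw(&tty);             // raw mode: no line buffering or translation
    tcsetattr(fd, TCSANOW, &tty);

    const char* msg = "g";       // e.g. a one-byte "grip" command
    write(fd, msg, std::strlen(msg));

    close(fd);
    return 0;
}
```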

The second option we pursued was Boost (specifically its Boost.Python bindings), which provides a way to communicate between Python and C++. If we got this working, we could just use the PySerial we were all familiar with to interface with the Arduino. This was pretty promising, as our initial tests in this direction were quite fruitful: creating a class in C++ to generate messages to pass to Python worked really well, and PySerial together with that class functioned without a hitch.
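As an illustration of the approach, exposing a small C++ class to Python with Boost.Python looks roughly like this (the class and module names here are made up for the example, not taken from our code):

```cpp
// Sketch of exposing a small C++ class to Python via Boost.Python.
// Class and module names are made up for this example.
#include <boost/python.hpp>
#include <string>

struct Messenger {
    // Pretend this is where the computer-vision result turns into a command.
    std::string nextCommand() const { return "grip"; }
};

BOOST_PYTHON_MODULE(messenger) {
    boost::python::class_<Messenger>("Messenger")
        .def("next_command", &Messenger::nextCommand);
}
```

On the Python side, the compiled module can then be imported like any other (`import messenger`), and the strings it produces can be encoded and handed to PySerial’s `write()`.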

The problem started when we actually tried to combine computer vision with Boost. OpenCV 2 wouldn’t play nicely with the Boost module because of the classes it uses: specifically, the “Mat” and “Vec3b” classes caused the Boost module to fail to compile. After looking through numerous forums and Stack Overflow posts, we discovered that part of the problem was that we needed to tell Boost it was compiling against OpenCV and to write converters that turn these Mats and Vec3bs into NumPy arrays. However, there wasn’t much good documentation for this, and we had a lot of trouble figuring out how to connect the pieces. We finally managed to get Mat to compile correctly but couldn’t figure out Vec3b. After a lot of painful debugging, we determined this method was a dead end, at least for now; we needed a bit more experience with C++ to get it working correctly.

Our third option was a library called LibSerial, which is specifically geared toward serial connections from C++. Many people seem to use it as a solution to this problem; however, the main site for the library appeared to be unfinished, which did not inspire confidence.

We attempted to use this library by following the installation instructions on the site, but we ran into trouble at the “make” step for building the library. The build failed because it needed the g++ -std=c++11 flag, and changing the makefile to add the flag didn’t seem to propagate to the make command. Needless to say, the whole process was tiresome. Eventually we realized we could install the library with sudo apt-get install, which was a relief. After installing it that way, we tried to run some of the examples and walk through the tutorial, but the library didn’t seem to function correctly, and nothing we did got anywhere close to talking to the Arduino. The code inside the library felt like it was missing constructors, so we concluded that the apt-get package might have been outdated or something along those lines.

In the end, after a long and stressful exploration of multiple ways to talk to the Arduino, we didn’t manage to get anything useful working.

Firmware

Our firmware ran on two important pieces of hardware - the Arduino and the Raspberry Pi.

The Arduino was really important because a) it was required by the class, and b) it was the easiest way to control the servos and linear actuators that actuate our arm, as well as the easiest way to receive sensory input from our system. And that’s how we used it! Our control code is written entirely for the Arduino and is described under our Software subsystems (along with a link to our GitHub repository, which has a plethora of code). We used the Arduino to power and ground the IR sensor and the touch sensors and to read sensory data from them. We also powered and grounded the servos and the linear actuators through the Arduino and wrote values to them to get them to move.

The Raspberry Pi was also an integral device for our project. It allowed us to disconnect our computers from the machine without losing any major functionality, which was super important for making a standalone arm and added to the professionalism and aesthetics of the project. We kept the C++ computer vision code on the Raspberry Pi and used it to talk to our camera and Arduino, allowing for more complex, image-processing-driven control of our system.

Dependencies