Combination Report Einstein Robot




Figure 3-11: Editing the system variable to make the VSA program accessible from any directory.

3.3.4 Complete Code

The code itself is fairly simple and is all contained in a single main function. For the full version of the code, see the Base Appendix.
4 Group interaction

For a visual overview of how the groups interacted with each other from a hardware perspective, see Fig. 4-1.





Fig. 4-1: Hardware interaction between the groups.

The PSP controller sent outputs wirelessly to the NXT brick, which polled for them. Depending on the output, several actions could occur:



  • The directional pad sent a command to the servo controller. It did this via the virtual keyboard adapter, which input the data to the laptop through the USB port. The laptop then serially sent a command to the servo controller, which in turn powered certain servos, giving our head an expression.

  • The left analog joystick sent Cartesian coordinates to the NXT brick, which interpreted them as power levels for the left and right motors.

  • The right analog stick, the right keys, and all the left and right buttons (essentially the remaining controls) sent commands to the arm. A sketch of this dispatch logic appears below.
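To make the division of controls concrete, here is a minimal sketch of the dispatch logic in Python. The names (PspState, the print placeholders standing in for the servo, drive, and arm commands) are hypothetical, not our actual interfaces, and the stick-to-motor mixing is just one plausible mapping.

```python
# Illustrative sketch only: PspState and the printed "commands" are hypothetical
# stand-ins for the real servo-controller, drive-motor, and arm interfaces.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PspState:
    dpad: Optional[str]   # e.g. "up" -> one facial expression
    left_x: float         # left stick, -1.0 .. 1.0
    left_y: float
    arm_inputs: dict      # right stick plus the remaining buttons

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def dispatch(state: PspState):
    """Route one polled controller state to the right subsystem."""
    if state.dpad is not None:
        # D-pad -> virtual keyboard adapter -> laptop (USB) -> serial -> servos
        print(f"servo controller: expression for d-pad '{state.dpad}'")
    # Left stick Cartesian coordinates become left/right drive motor power
    left = clamp(100 * (state.left_y + state.left_x), -100, 100)
    right = clamp(100 * (state.left_y - state.left_x), -100, 100)
    print(f"drive motors: L={left:.0f} R={right:.0f}")
    # Everything else is forwarded to the arm group's code
    print(f"arm: {state.arm_inputs}")

dispatch(PspState(dpad="up", left_x=0.2, left_y=0.8, arm_inputs={"grip": True}))
```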

All of the robot's interactions essentially relied on the inputs provided by the driver. The most challenging part of coordinating the different groups and their respective code was sharing the PSP controller and clearly communicating which buttons accomplished which actions.


The only data going from the laptop to the NXT brick was the initial program download. Once the program has been downloaded, the NXT brick can run the code independently of the laptop.
5 Knowledge to be carried over to ECE479/579
One of our more ambitious goals is to make an autonomous robot perform the same tasks we performed this semester (fall 2009). This semester we used wireless control to navigate our robot to pick up a can and deliver it to a predetermined location. Our goal is to use what we learned in class this semester to make the robot autonomous next semester.

Mapping


Our robot base will need some way of interacting with the real world. We are opting to use two cameras for stereoscopic vision, sonar, and perhaps infrared and other sensors to map locations and objects of interest in the room.
Another challenge arises at the "near object of interest" stage. We would use the sensors to collect data, identify potential collisions, and plan ahead.
The goal would be to map out paths as accurately and as quickly as possible. We would most likely impose some kind of grid structure on the environment. Each sensor could generate its own map and grid; a union of the grids would yield a worst-case map, and we could then apply navigation algorithms to get around obstacles.
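A minimal sketch of the "union of grids" idea follows, assuming each sensor reports its own occupancy grid in which True marks a possibly blocked cell; the grid shapes and values here are illustrative.

```python
# Worst-case map: a cell is considered blocked if ANY sensor flags it.
def worst_case_map(grids):
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[any(g[r][c] for g in grids) for c in range(cols)]
            for r in range(rows)]

# Example: a sonar grid and a camera grid each flag a different cell.
sonar  = [[False, True ], [False, False]]
camera = [[False, False], [True,  False]]
print(worst_case_map([sonar, camera]))
# -> [[False, True], [True, False]]  (blocked wherever either sensor flagged)
```

Planning against this union is conservative: the robot may detour around cells that only one noisy sensor flagged, but it will not plan a path through anything any sensor considers an obstacle.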



Fig. 22: Sample mapping algorithm.
Fig. 22 shows a sample mapping algorithm that we could use. While it might not be perfect for our application, we can use most of it to accurately map almost any 2-D environment. We can use our sensors (cameras/sonar) to find landmarks. This will most likely entail image processing, which is covered below.
Another mapping option would be a Braitenberg-style vehicle. While not necessarily as simple as the classic implementation, we could find wavelength-specific sensors and use them for "rough tuning" the orientation of the robot. This would save processing power for the image processing and allow the robot to reach goal objects faster while expending much less energy.
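As a sketch of what that rough tuning could look like, here is a classic Braitenberg "vehicle 2b" controller in Python: two wavelength-specific sensors with crossed excitatory wiring, so the robot turns toward the stronger stimulus. The base and gain values are illustrative assumptions.

```python
# Braitenberg vehicle 2b: crossed excitatory connections steer toward the source.
def braitenberg_2b(left_sensor, right_sensor, base=20.0, gain=80.0):
    """Map two normalized sensor readings (0..1) to motor powers.
    The LEFT sensor drives the RIGHT motor and vice versa, so the
    robot veers toward whichever side reads the stronger stimulus."""
    left_motor = base + gain * right_sensor
    right_motor = base + gain * left_sensor
    return left_motor, right_motor

# Source off to the left: the left sensor reads stronger, the right motor
# spins faster, and the robot turns left, toward the source.
print(braitenberg_2b(left_sensor=0.9, right_sensor=0.2))   # -> (36.0, 92.0)
```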

Genetic Algorithms


Ultimately, we would like to implement a genetic program that learns the shortest path to each destination and navigates accordingly. It would navigate by moving and then asking itself, "Am I closer to or farther from the goal than when I started?" It would use this data to find the optimal path. This is one of many potential applications of genetic programming in this project.
Evolutionary algorithms (genetic algorithms fall under this grouping) make use of principles from Darwin's theory of evolution. Genetic algorithms iteratively evolve a solution by examining a whole population of candidate solutions. In our case, a new set of candidates is presented every time we move. We would be required to define fitness functions; by tuning their parameters, we can optimize our path efficiency.
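Below is a toy sketch of this idea, not our eventual implementation: individuals are fixed-length move sequences on a grid, and the fitness function rewards ending near the goal. The population size, mutation rate, and grid coordinates are illustrative assumptions.

```python
# Toy genetic algorithm: evolve a move sequence whose endpoint is near GOAL.
import random

MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
START, GOAL, LENGTH = (0, 0), (4, 3), 12

def endpoint(path):
    x, y = START
    for m in path:
        dx, dy = MOVES[m]
        x, y = x + dx, y + dy
    return x, y

def fitness(path):
    # "Am I closer or farther than when I started?" -- less distance is better
    x, y = endpoint(path)
    return -(abs(x - GOAL[0]) + abs(y - GOAL[1]))

def mutate(path, rate=0.1):
    return [random.choice("UDLR") if random.random() < rate else m for m in path]

random.seed(1)
pop = [[random.choice("UDLR") for _ in range(LENGTH)] for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                  # keep the best paths
    pop = elite + [mutate(random.choice(elite)) for _ in range(40)]
print("".join(pop[0]), endpoint(pop[0]))              # best path and its endpoint
```

A real version would add obstacle penalties from the worst-case map to the fitness function, so paths through blocked cells are selected against.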

Since our robot will proceed based on data from its sensors, we will need to implement some kind of system that allows for blind navigation (no goal in sight). Once the robot sees the goal, it can compute the shortest path from the mapping it has already accomplished, deducing the most efficient route to the target object from its current position and its starting position.


By keeping a constant record of objects in its map, we can combine the two systems to find the best path to the goal.

Image processing


Real-time image processing is the most comprehensive tool we can use for recognizing variables in an environment that the robot needs to interact with, from path following to object detection and recognition and more. Since all the environments the robot is expected to function in are defined primarily by straight-line edges, we can use these edges to help define area boundaries when mapping the local environment, as well as to flag anomalous obstacles where such straight edges are absent.
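One standard way to extract those straight-line boundaries is a Canny edge detector followed by a probabilistic Hough transform. Here is a sketch using OpenCV (assumed to run on the laptop, not the NXT brick); the synthetic frame and the Hough thresholds are illustrative.

```python
# Straight-edge extraction sketch: Canny edges feed a probabilistic Hough transform.
import cv2
import numpy as np

# Synthetic stand-in for a camera frame: a dark room with one bright "wall" edge
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.line(frame, (20, 200), (300, 60), 255, 2)

edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=50, maxLineGap=5)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    print(f"boundary segment: ({x1},{y1}) -> ({x2},{y2})")
# Regions with no long segments could be flagged as anomalous obstacles.
```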
As one of our example objectives is to pick up a can and deliver it to another location, color-based object detection and image segmentation can be utilized. For example, let our desired objective be an unused disposable red party cup. We assume that any red object of a specifically defined size is unlikely to be anything other than our objective. Size-relative-to-distance calculations can be quickly formed using a third sensor designed specifically for gathering distance data; infrared and sonar are widely used options, but a laser-based approach would be even more accurate for determining the distance of a specific object. We could then cross-reference with an infrared-based image to check whether the cup is hot or cold, indicating that it may be in use, or room temperature and therefore valid for retrieval.
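A sketch of the red-cup detection step follows, again assuming OpenCV on the laptop. The hue bands, the expected-area constant, and the inverse-square size model are illustrative assumptions, not measured values.

```python
# Color segmentation sketch: find a red blob whose pixel area matches the
# size expected for a cup at the measured distance.
import cv2
import numpy as np

def find_red_cup(bgr_frame, distance_m):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so combine two hue bands
    mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 80), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Assumed model: expected pixel area shrinks with the square of distance
    expected_area = 2500 / (distance_m ** 2)
    for c in contours:
        if 0.5 * expected_area < cv2.contourArea(c) < 2.0 * expected_area:
            return cv2.boundingRect(c)     # plausible cup found
    return None

demo = np.zeros((240, 320, 3), np.uint8)
cv2.rectangle(demo, (140, 100), (180, 160), (0, 0, 255), -1)  # red blob (BGR)
print(find_red_cup(demo, distance_m=1.0))
```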
While the NXT Intelligent Brick has fairly advanced memory and processing capabilities, mapping a 4-dimensional color table (ir, r, g, b) for object detection is memory intensive: even if we truncate the 3 least significant bits of each channel, leaving 5 bits per channel, we are still working with a 2^20-entry (1 MB) table. However, since the infrared range is not critical for real-time information, we can call up the infrared image on an as-needed basis and reduce the segmentation array to a much more manageable size.
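The memory estimate can be checked directly; the snippet below works out the table size for a few truncation choices, assuming one byte per table entry.

```python
# Four 8-bit channels (ir, r, g, b) truncated by N low bits give a
# 2**(4*(8-N))-entry lookup table.
for dropped_bits in (0, 3, 4):
    bits_per_channel = 8 - dropped_bits
    entries = 2 ** (4 * bits_per_channel)
    print(f"drop {dropped_bits} bits: {entries:,} entries "
          f"({entries / 2**20:.2f} MiB at 1 byte/entry)")
# drop 3 bits: 1,048,576 entries (1.00 MiB at 1 byte/entry), matching the
# 1 MB figure above; dropping a 4th bit shrinks the table to 64 KiB.
```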


