Because RobotC may be limited in its image processing and sensor integration capabilities, we may need to migrate our platform to NXT++. Migrating would also make it easier to accommodate more sensors than we have ports for, and some members already have experience with image processing on that platform. Ultimately, depending on what our research into RobotC's capabilities shows, we may stay with it.
We want to implement a stereoscopic camera system. Significant technical considerations are involved in determining distances and sizes through stereo image processing. We hope to use these cameras for image recognition and mapping.
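As a rough illustration of the stereo geometry involved, depth can be recovered from the disparity between matching features in the two images under a simple pinhole-camera model. The focal length and baseline below are placeholder values, not measurements of our cameras.

```python
# Sketch of stereo depth estimation under a pinhole-camera model.
# focal_px (focal length in pixels) and baseline_m (camera separation)
# are illustrative values, not measured from our hardware.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A feature seen 35 px apart between the left and right images:
z = depth_from_disparity(35.0)  # about 1.2 m
```

In practice the hard parts are calibration, rectification, and finding the matching features in the first place; the formula itself is the easy step.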
We would like to use sonar sensors to create another map grid. These will have a very similar purpose to the cameras, but will mainly be used for obstacle mapping.
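A minimal sketch of how a sonar return could populate such an obstacle grid: each ping gives a range along a known heading, and the cell at that range is marked occupied. The grid size, cell resolution, and poses below are illustrative, not taken from our robot.

```python
import math

# Mark the grid cell hit by a single sonar return as occupied (1).
# Grid is a list of rows indexed [row][col], with row = y, col = x.

def mark_obstacle(grid, robot_xy, heading_rad, range_m, cell_m=0.1):
    """Convert a (range, heading) sonar reading into an occupied cell."""
    x = robot_xy[0] + range_m * math.cos(heading_rad)
    y = robot_xy[1] + range_m * math.sin(heading_rad)
    col, row = int(x / cell_m), int(y / cell_m)
    if 0 <= row < len(grid) and 0 <= col < len(grid[0]):
        grid[row][col] = 1
    return grid

grid = [[0] * 20 for _ in range(20)]
mark_obstacle(grid, (1.0, 1.0), 0.0, 0.5)  # obstacle 0.5 m ahead
```

A real mapper would also mark the cells along the beam as free and account for the sonar's wide cone, but this shows the basic range-to-cell conversion.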
Utilize the grids generated by our sensors, and explore until an object of interest is sighted.
Interpret data gathered and look for objects of interest. There will be many avenues to take, and we will need to research this throughout the semester before implementing it.
Once we have located an object of interest, find the most efficient path and implement it.
Infrared Sensor/Laser Sensor
Infrared sensors provide a narrower area for distance detection, which is advantageous for measuring the distance to a specific object or target. However, in many scenarios interference can cause poor readings, and some surfaces may not reflect in the IR range. A laser-based measurement sensor would work very well here but is much more expensive to implement. The resulting grid could be unioned with the map grids generated by our other sensors.
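The grid union mentioned above could start out as a cell-wise maximum, so a cell counts as occupied if any sensor flagged it. The tiny grids below are illustrative.

```python
# Union of the obstacle grids produced by different sensors (sonar, IR,
# vision): cell-wise maximum over same-sized grids of 0/1 values.

def fuse_grids(*grids):
    return [[max(cells) for cells in zip(*rows)] for rows in zip(*grids)]

sonar = [[0, 1], [0, 0]]
ir    = [[0, 0], [1, 0]]
print(fuse_grids(sonar, ir))  # [[0, 1], [1, 0]]
```

With probabilistic (0.0 to 1.0) grids, the same shape of function could combine cells with a weighted average or log-odds update instead of `max`.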
6.2 Head "To-Do" List
For the continued development of the head, there are several areas that we are considering looking into. We would like to eventually give the head more of a personality and a greater range of emotion than it currently has. We feel this can be done through the use of genetic algorithms and/or genetic programming, though it is possible that evolutionary programming might be the best way to allow the robot to develop its own emotions.
We would like to save snippets of motion into separate files that can then be called within a lisp tree. Then, by adding a camera so the robot can recognize facial expressions, it can learn which types of expressions work best. This might be done through mimicry for some of the basic emotions.
A large population of emotions would need to compete in some way, and a fitness function would then have to be developed to find those that perform best, allowing those to continue while removing the lowest performers. The most difficult part of this process would be defining the fitness function. In fact, the fitness function would probably derive its feedback from human input, as the robot would have no way of knowing whether it is producing an acceptable expression or not. The fitness function could be as simple as a human observing the results of a genetic modification and giving it a 'thumbs-up' or a 'thumbs-down' for feedback into the system. The feedback might also be multi-valued, such that an expression could be rated 'good', 'bad', or 'invalid expression'. It could also be rated (say, 1 to 5) for relative degree of success.
Whatever the fitness function's properties, the fact that it will probably have to be derived from human input will severely limit the number of generations that can be evaluated in a reasonable amount of time. The best use of time may be to have basic behaviors mimicked (as described above) and then have some of the parameters genetically evolve, restricted to only very slight changes. This could give rise to a nice library of variations of behavior such as "happy smile", "sad smile", "grimace", "gleeful smile", "snickering smile" and similar variations. The variations would have to be categorized by humans and a tree of related emotions built, so that when several inputs are available the robot could choose the "happy" category, followed by "grimace" if, say, a sensor indicated that it bumped into an unexpected object.
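The evolutionary loop described above can be sketched in a few lines. Here each individual is a vector of expression parameters (think servo offsets); the fitness function is a stand-in for the human thumbs-up/thumbs-down feedback, and the population size, mutation scale, and generation count are all illustrative.

```python
import random

random.seed(0)  # reproducible runs

def fitness(params):
    # Placeholder: prefers parameters near 0.5. In reality this score
    # would come from a human rating each expression.
    return -sum((p - 0.5) ** 2 for p in params)

def mutate(params, scale=0.05):
    # Restrict evolution to only very slight changes, as suggested above.
    return [p + random.uniform(-scale, scale) for p in params]

# Ten individuals, each a vector of four expression parameters.
population = [[random.random() for _ in range(4)] for _ in range(10)]
initial_best = max(population, key=fitness)

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    # Keep the best half; replace the worst half with mutated copies.
    population[5:] = [mutate(p) for p in population[:5]]

best = max(population, key=fitness)
```

Because the elites are carried over unchanged, the best fitness in the population can only improve from generation to generation; the human-in-the-loop bottleneck is how many of those generations are affordable.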
This technique is currently being applied to “simpler” robots than the one we are working on. Those that perform worst are removed from the population and replaced by a new set, whose behaviors are based on those of the winners. Over time the population improves, and eventually a satisfactory robot may appear. This can happen without any direct programming of the robots by the researchers. This method is being used both to create better robots and to explore the nature of evolution. However, since the process often requires many generations of robots to be simulated, this technique may be run entirely or mostly in simulation, then tested on real robots once the evolved algorithms are good enough. (Patch, 2004)
For behaviors, most of the algorithm probably could not be simulated with the tools we currently have on hand, so instead the robot would have to rely on feedback from us about which emotions were acceptable and which were not.
6.3 Arm "To-Do" List
Future work that can be performed on this mechanical arm includes adding sensors. These could include a sonar range finder, stereo (two-camera) vision, or even an experimental smell detector. This would allow automation of the entire robot.
Specify End-effector Position
Additionally, and more specifically for the mechanical arm, future work could involve solving the inverse kinematic equations for all degrees of freedom in the arm. This would allow the user, or an automated intelligent program utilizing sensors, to specify a position and orientation for the hand. All the angles of rotation for each motor and servo would be calculated automatically, and the arm would move to that position.
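As a minimal sketch of what such a solver involves, the closed-form inverse kinematics for just two planar links is shown below. The link lengths are placeholders rather than our arm's dimensions, and the full arm would need all of its joints and the hand's orientation handled as well.

```python
import math

# Closed-form inverse kinematics for a planar two-link arm.
# l1, l2 are illustrative link lengths in meters.

def two_link_ik(x, y, l1=0.3, l2=0.2):
    """Return (shoulder, elbow) angles in radians reaching point (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1.0 + 1e-9:
        raise ValueError("target out of reach")
    cos_elbow = max(-1.0, min(1.0, cos_elbow))  # clamp rounding error
    elbow = math.acos(cos_elbow)                # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Straight-out reach to (0.5, 0.0) needs both joints at zero:
print(two_link_ik(0.5, 0.0))  # prints (0.0, 0.0)
```

There is a second, elbow-up solution mirrored about the line to the target; a real controller would pick whichever is reachable without collisions.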
With the addition of sensors, the arm and hand could potentially utilize trajectory planning. This would entail sensing an object coming toward the robot, calculating its speed and trajectory, and moving the arm and hand to a position along that trajectory to potentially deflect or catch the incoming object. The movement of the arm and hand would have to be sped up and position control accuracy would have to be increased for this to be possible.
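The prediction step of that trajectory planning could start from a constant-velocity assumption: given two timed sightings of the object, extrapolate where it will be after a lead time and move the hand there. All positions and times below are illustrative.

```python
# Constant-velocity trajectory prediction from two timed sightings.
# p0, p1 are positions (any dimension), dt seconds apart.

def predict_position(p0, p1, dt, lead_time):
    """Extrapolate straight-line motion lead_time seconds past p1."""
    velocity = [(b - a) / dt for a, b in zip(p0, p1)]
    return [b + v * lead_time for b, v in zip(p1, velocity)]

# Object seen at (2.0, 1.0), then at (1.8, 1.0) a tenth of a second later:
print(predict_position([2.0, 1.0], [1.8, 1.0], 0.1, 0.5))  # about (0.8, 1.0)
```

Catching would additionally need gravity in the model and, as noted above, much faster and more accurate arm movement than we currently have.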
Genetic Search Algorithm
As long as memory in the NXT brick allows, genetic algorithms could be implemented to allow for room mapping and searching. This would allow the robot, and more specifically the arm, to know the position of objects in the room and potentially interact with them.
Image processing would be an essential upgrade with vision sensors so that the incoming data could be interpreted properly. Intelligent processing would allow more accurate readings and would provide optimized responses.
Note that all the following files were placed in the C:\VSA directory.
happy.wav - sound file for happy, a laugh.
happy.vsa - the motion file for neutral -> happy -> sad as one continuous motion
stay_happy.vsa - the motion file for a neutral -> happy transition. Does not return the head to neutral.
shrthppy.bat - calls VSA with happy.vsa to produce the neutral -> happy -> sad motion sequence
contents of file:
vsa "happy.vsa" /play /minimize /close
longhappy.bat - calls VSA with stay_happy.vsa to produce the neutral -> happy state.
Contents of file:
vsa "stay_happy.vsa" /play /minimize /close
neutral.vsa - motion sequence file to reset all servos to the neutral position. Tends to be jerky since there is no velocity control for return to neutral when the present state is unknown ahead of time.
neutral.bat - calls VSA with neutral.vsa to return the head to neutral. Used after longhappy.bat and longsad.bat
contents of file:
vsa "neutral.vsa" /play /minimize /close
sad.wav - sound file for sad emotions, a groan.
sad.vsa - the motion file for neutral -> sad -> neutral as one continuous motion
stay_sad.vsa - the motion file for neutral -> sad transition. Does not return to neutral
shortsad.bat - calls VSA with sad.vsa to produce the neutral -> sad -> neutral motion sequence
contents of file:
vsa "sad.vsa" /play /minimize /close
longsad.bat - calls VSA with stay_sad.vsa to produce the neutral -> sad state.