We are currently maturing the existing prototype of the evolutionary robotic platform SplotBot (a description of which can be found under publications) to make it reliable enough for extensive autonomous experimentation over long periods of time.
EvoBot combines elements from open-source 3D printers and modular robotics to provide chemists, micro-biologists, and artificial chemical life researchers with a cheap, extendible, and open-source robotic liquid-handling platform. EvoBot is designed in particular for experiments that require continuous interaction. Mechanically, EvoBot is organised in three layers:
1) an actuation layer containing a head which can move in the horizontal plane. The head itself may contain various modules such as actuated syringes, grippers, and other tools,
2) a transparent experimental layer where petri dishes, micro-wells, and other vessels are placed,
3) a sensing layer consisting of a camera able to monitor the experimental layer from below.
EvoBot’s electronics and control are adapted versions of open-source 3D printer electronics and software. In our experiments, EvoBot reached a maximum speed of 180 mm/s with a positioning precision of ±0.1 mm. We also present a mock-up of a use case involving the refueling of a microbial fuel cell. These specifications make EvoBot suitable for experiments in wet labs. We hope EvoBot will aid advances in chemistry, micro-biology, and artificial life research.
EvoBot consists of one structural frame and three types of layers, which in the default configuration are organised as follows: the top layer carries actuation, the middle layer is the experimental layer, and the bottom layer is an observation layer. However, this default layout can easily be changed; for example, several experimental layers can be introduced if a cascading experiment is under investigation. All three layers can easily be moved up and down in the frame. Functionality is provided by modules, so new capabilities can be added incrementally simply by mounting new modules.
The actuation layer contains a head, which can move in the horizontal plane using two belt and pinion mechanisms and two stepper motors. Up to 11 modules can be mounted on this head to provide different functionalities. At this point only an actuated syringe module has been implemented, but various actuator and sensor modules are envisioned, e.g. a temperature sensor, a pH sensor, a gripper for manipulating dishes, an extruder for printing reaction vessels, etc.
The experimental layer is essentially just a frame with a glass plate on which vessels can be organised as required by the specific experiment. A hole in one corner allows vessels to be moved and dropped for automatic disposal; together with a petri dish dispenser system currently under development, this allows a large number of experiments to be run in sequence.
The observation layer is essentially the same as the actuation layer, except that its modules cannot interact physically with the experiments above because the glass plate shields them. This limits the useful modules for this layer to those that do not directly manipulate the experiments. Currently, a static webcam is used in the observation layer to monitor the experiment and provide feedback to the robot. However, in the longer term, thermal imaging, magnetic stirring, or similar capabilities could be integrated into modules for the observation layer.
The syringe module has two degrees of freedom. It has a linear stepper motor (a stepper motor with a lead screw and an internal nut) for moving the plunger up and down, and a rack and pinion mechanism with another stepper motor for moving the syringe up and down. The syringe can easily be replaced, giving the user the opportunity to use a syringe that matches the experimental requirements.
More modules are under development including a module to hold an OCT scanner, a module with a gripper, and a module to measure pH.
The goal of the software effort was to provide the end user with a simple programming interface to the robot. The software has a host side and a robot side. The host side communicates with the robot side over a serial USB connection, and the robot-side software is a modified version of the Marlin firmware used in open-source 3D printers.
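Since the robot side runs Marlin-derived firmware, host-side commands take the form of G-code strings sent over the serial link. The helper below is a minimal sketch of this idea; the command format follows standard Marlin conventions, while the port name and feedrate are assumptions, not EvoBot's actual values:

```python
def move_command(x_mm: float, y_mm: float, feedrate_mm_min: int = 9000) -> str:
    """Build a Marlin-style G1 move for the head in the horizontal plane."""
    return f"G1 X{x_mm:.2f} Y{y_mm:.2f} F{feedrate_mm_min}"

# Sending the command over the serial USB link (port name is an assumption):
#   import serial  # pyserial
#   with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as port:
#       port.write((move_command(50, 80) + "\n").encode("ascii"))
#       port.readline()  # Marlin-style firmware replies "ok" once queued
```

In practice the host-side Python layer wraps such commands so that the end user never writes raw G-code.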
On the host side we chose Python as the implementation language, both because it is the language our collaborators have the most experience with and because of its simplicity. The software is divided into a computer vision calibration program, a graphical user interface for manual control of the robot, and a simple application programming interface.
The tracking software consists of two parts: identifying the desired type of blob, and tracking this type through the application programming interface. Before a blob can be identified, the coordinate system of the camera must be calibrated against the coordinate system of the robot. To this end, a transformation matrix is calculated simply by having the user click on the tip of the syringe needle in a camera feed at three different positions. As the user is free to place petri dishes of different sizes anywhere on the experimental layer, and to set the distance between the experimental layer and the actuation layer as desired, a calibration program is provided to specify the appropriate points for maximum calibration accuracy. In this program the user defines the location of the centre of the petri dish using the manual control program, the diameter of the petri dish, and the distance between the experimental layer and the actuation layer.

Next, the desired blob is identified. A predefined colour set helps the user choose the desired blob. For arbitrary or uncommon colours, or unconstrained lighting conditions with severe variations, the user can find and set the HSV parameters of the blob with a few mouse clicks on the video feed in the blob identification program. This enables the user to accurately track any colour. All the values obtained during blob identification are used when the camera object is instantiated through the application programming interface.
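The three-click calibration amounts to solving for a 2×3 affine transform that maps camera coordinates onto robot coordinates. A minimal sketch with NumPy, assuming the three needle-tip positions are known in both frames (the function names are ours, not those of the actual calibration program):

```python
import numpy as np

def affine_from_three_points(cam_pts, robot_pts):
    """Solve q = A p + t from three (camera, robot) point pairs.

    cam_pts, robot_pts: 3x2 arrays of matching points.
    Returns the 2x3 matrix M = [A | t].
    """
    cam = np.asarray(cam_pts, dtype=float)
    rob = np.asarray(robot_pts, dtype=float)
    # Homogeneous camera coordinates: each row is [x, y, 1]
    P = np.hstack([cam, np.ones((3, 1))])   # 3x3
    # Solve P @ M.T = rob for the transform M (2x3)
    return np.linalg.solve(P, rob).T

def cam_to_robot(M, point):
    """Map a camera-frame point into the robot's coordinate frame."""
    x, y = point
    return M @ np.array([x, y, 1.0])
```

Three non-collinear clicks determine the transform exactly; with more clicks one would switch to a least-squares fit (`np.linalg.lstsq`).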
The application programming interface is also kept as simple as possible. It gives the programmer access to moving the robot head, moving the syringes and plungers, and querying the positions of these elements.
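As an illustration of how small such an interface can be, the sketch below mimics the operations listed above; the class and method names are hypothetical and do not reflect EvoBot's actual API:

```python
class EvoBotAPI:
    """Hypothetical stand-in for the programming interface described above."""

    def __init__(self):
        self._head = (0.0, 0.0)   # head position (x, y) in mm
        self._plungers = {}       # syringe id -> plunger position in mm

    def move_head(self, x_mm, y_mm):
        """Move the head to (x, y) in the horizontal plane."""
        self._head = (float(x_mm), float(y_mm))

    def move_plunger(self, syringe_id, position_mm):
        """Move the plunger of one syringe to an absolute position."""
        self._plungers[syringe_id] = float(position_mm)

    def head_position(self):
        """Return the current (x, y) position of the head."""
        return self._head

    def plunger_position(self, syringe_id):
        """Return the current plunger position of one syringe."""
        return self._plungers.get(syringe_id, 0.0)
```

In a real deployment each of these calls would translate into G-code sent to the firmware; here they only record state to show the shape of the interface.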
We instantiate the vision software module with the parameters found during vision calibration. Once instantiated, the vision module returns the positions of droplets in the robot’s coordinate frame. Hence the robot can move to a specific blob and inject or extract fuel.
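The core of this droplet localisation, finding the position of a thresholded blob, can be sketched as a centroid computation over the binary mask produced by the HSV threshold. This is a simplification of what a full OpenCV-based tracker would do, and the function name is ours:

```python
import numpy as np

def blob_centroid(mask):
    """Centroid (x, y), in pixels, of the foreground pixels of a binary mask,
    e.g. the mask produced by thresholding the camera feed in HSV space.
    Returns None when no pixel matches."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

The pixel centroid is then mapped through the camera-to-robot calibration transform before a move command is issued, so the syringe lands on the droplet rather than on its image.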