1. TensorFlow object detection requires several dependent libraries to be installed before it can run, including TensorFlow itself. Follow the instructions here to install and test these libraries:
** Be sure to install TensorFlow 1.13.0 by issuing the following command in the terminal:
pip install tensorflow==1.13.0
Otherwise you will install a later version of TensorFlow that is incompatible with this tutorial.
2. Now that TensorFlow 1.13.0 is installed, clone the TensorFlow models repository at version 1.13.0 from GitHub:
With TensorFlow 2 officially released, we want to make sure we check out the 1.13.0 tag of the models repository. You can clone that specific version using the following commands in the terminal (note that the checkout must be run from inside the cloned folder):
git clone https://github.com/tensorflow/models
cd models
git checkout v1.13.0
3. All commands from now on must be run from the models/research folder.
4. First, download the sample dataset from the PlantVillage Dropbox using the following link:
5. Create a text file called label_map.pbtxt using the following content:
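A label map assigns an integer ID (starting at 1; 0 is reserved for the background) to each class name. The class names below are placeholders for illustration; replace them with the classes in your own dataset:

```
item {
  id: 1
  name: 'healthy'
}
item {
  id: 2
  name: 'diseased'
}
```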
6. Create a text file called trainval.txt and list all of the image filenames (without extensions). Your trainval file should be structured like this:
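As a sketch with hypothetical filenames: if your images are plant_001.jpg and plant_002.jpg, each line of trainval.txt lists the filename without its extension, followed by the object's label ID from label_map.pbtxt:

```
plant_001 1
plant_002 2
```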
7. Set up the folder structure as follows:
+ data: contains the label_map.pbtxt file, annotations folder, and images folder
+ data / images: contains the image files in jpg format
+ data / annotations: contains a folder xmls and a text file ‘trainval.txt’
+ data / annotations / xmls: contains all of the annotation files created in LabelImg; ‘trainval.txt’ contains the list of filenames and corresponding object label IDs
+ model: contains the folders ckpt and eval, and the config file
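The layout above can be created in one go. This is just a sketch, assuming you run it from your project root:

```shell
# Create the data and model folder layout described above
mkdir -p data/images data/annotations/xmls model/ckpt model/eval
# Placeholder files, to be filled in by the earlier steps
touch data/label_map.pbtxt data/annotations/trainval.txt
```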
Now we're ready to create our TensorFlow Record files!
8. Run the following command to create your own TF Record files:
** Note: make sure to run the command from the models/research directory, and replace /path/to/data with the actual path to your data folder
python object_detection/dataset_tools/create_pet_tf_record.py \
    --label_map_path=/path/to/data/label_map.pbtxt \
    --data_dir=/path/to/data \
    --output_dir=/path/to/data
9. Download a pre-trained object detection model checkpoint from the TensorFlow model zoo. For example, you can download the ssd_mobilenet_v2 model pre-trained on the COCO dataset.
10. Download the corresponding model config file and update the paths within the file (change PATHS_TO_BE_CONFIGURED) to match the local paths to your data.
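For illustration, the PATH_TO_BE_CONFIGURED placeholders end up looking roughly like the excerpt below; the exact field names vary between model configs, and the paths shown are examples, not your actual paths:

```
train_config: {
  fine_tune_checkpoint: "/path/to/model/ckpt/model.ckpt"
  ...
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "/path/to/data/pet_faces_train.record-?????-of-00010"
  }
  label_map_path: "/path/to/data/label_map.pbtxt"
}
```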
11. Change directory into the models/research directory:
cd models/research
12. Generate the protobuf files:
protoc object_detection/protos/*.proto --python_out=.
13. Export the Python path so that the object detection modules can be found (run from the models/research directory):
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
Now we can start training!
14. Start a training session by running the following command:
** Be sure to replace /path/to/.. with the actual path to the directory or file
python object_detection/model_main.py --alsologtostderr \
    --model_dir=/path/to/model/ckpt \
    --pipeline_config_path=/path/to/model/pipeline.config
If you would like to monitor the progress of the training session, you can use TensorBoard by issuing the following command in the terminal:
tensorboard --logdir=/path/to/ckpt/
** As always, be sure to replace /path/to/ckpt/ with the actual path to the directory with the ckpt event log(s)