Mentors, trainers, and allies

Our goal is for our core portfolio of standardized training materials to be delivered by female experts recruited from the local region, whom we further engage in mentoring and leadership activities. By centering these role models throughout our programming, we give our participants a vision of their future careers. We strive to foster an inclusive environment for honest dialogue and encourage mentorship to continue even after the program concludes. However, the very gender gap we aim to address often makes it difficult to recruit female educators and role models for our programs. This challenge has led us to distinguish three leadership roles: “mentors” (female role models, who participate in training and mentorship), “allies” (male trainers and facilitators), and “trainers” (support from the international organizing team). Participation from each of these groups is critical to developing and supporting our participants.

  • Keen interest from female leaders to foster the next generation of conservationists, including willingness to engage honestly in vulnerable conversations and provide career advice 
  • Growing interest from allies to support development of women in their field and organizations 
  • Funding to support attendance and honorarium for high-quality mentors and allies 
  • We have established a code of conduct and set clear expectations up-front on how mentors and allies should engage with students during and after the program 
  • Mentors and allies with a background in training as well as expertise in conservation tech are preferred 
  • Wherever possible, we seek a combination of mid-career and established mentors, who can speak to participants about different stages of the conservation career journey 
  • Male allies need to be carefully selected to create a supportive, safe environment 
  • We maintain and cultivate female-only spaces at the workshop where male allies and trainers are not allowed 
Local partners and host institutions

This program aims to equip women with practical skills that are actionable within their local context, enabling them to seize opportunities such as funding and career advancement within their specific regions. To achieve this, we collaborate closely with local partners and host institutions to adapt our core training materials, ensuring they align with local challenges, processes, and institutions. By tailoring our trainings to address the unique needs and contexts of the women we support, we maximize the relevance and impact of our programming. 

  • Local partners with aligned visions in education, upskilling, and empowerment 
  • On-the-ground support from women within the host and collaborating organizations 
  • Networks of experienced local educators and trainers in the conservation technology space  
  • Educational systems vary significantly, even across countries in the same region. For example, certain types of trainings or activities - such as active learning approaches - may be more difficult for students from countries where education is centered on rote memorization. Understanding local learning preferences and adapting teaching methods accordingly can support deeper engagement. 
  • Certain technologies or methodologies, such as drones or cloud-based data storage, may be prohibited or prohibitively expensive in some regions. Partnering with local conservation technology experts ensures that we focus on accessible, actionable technologies for our participants.
Community rangeland officers assessing the health of rangeland in Engaruka Valley in Northern Tanzania (East and South Africa). Photo: Neovitus Sianga
Open-source software for vulture monitoring

This building block leverages Declas, an open-source AI tool, to automate vulture monitoring. By analyzing images or videos, it detects and classifies species with high accuracy. The system eliminates manual counting, enabling scalable, cost-effective wildlife tracking. Users—researchers, rangers, or conservationists—simply upload visual data, and the tool generates real-time insights for informed decision-making. The model is built on YOLOv11 (Ultralytics) and trained on crowdsourced data.

  • A simple and intuitive user interface to ensure accessibility for non-technical users.
  • Documentation and training resources for users to understand and effectively utilize the application.
  • Community feedback to continually enhance the tool’s usability and features.
  • Usability is key; overly complex interfaces deter users.
  • Offering technical support and clear documentation ensures broader adoption.
  • Integration challenges included aligning the AI model’s output with user-friendly visualization tools; iterative testing was essential to resolve this.
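The workflow described above—upload visual data, receive species counts instead of manual tallies—can be sketched as a small post-processing step. Declas's internals are not shown here, so the function name, detection format, and confidence threshold below are illustrative assumptions, not the tool's actual API:

```python
from collections import Counter

def count_species(detections, min_confidence=0.5):
    """Collapse per-image detections (species label, confidence score)
    into per-species counts, keeping only confident detections.
    The 0.5 threshold is illustrative."""
    return Counter(
        species for species, conf in detections if conf >= min_confidence
    )

# Example: hypothetical detections from one uploaded image
detections = [
    ("Gyps africanus", 0.91),
    ("Gyps africanus", 0.87),
    ("Torgos tracheliotos", 0.62),
    ("Gyps africanus", 0.31),  # below threshold, ignored
]
counts = count_species(detections)
# counts["Gyps africanus"] == 2, counts["Torgos tracheliotos"] == 1
```

Aggregated counts like these are what replace manual tallying for end users.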
Field data collection and validation framework

The framework ensures that the AI model is robust and generalizable across different regions and habitats. Data collected is used to test the model’s ability to recognize vulture species in diverse conditions, providing feedback for further optimization.

  • Deployment of drones and camera traps in strategic locations within reserves for optimal coverage.
  • Collaboration with local conservation teams for field logistics and data collection.
  • Consistent testing and refinement of the model based on field results to address discrepancies.
  • Having local partnerships ensures smoother field operations and enhances data collection efficiency.
  • A major challenge was dealing with low-quality or insufficient data; addressing this required setting up more camera traps in diverse locations.
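The feedback loop above hinges on comparing model predictions against labels confirmed by field teams. A minimal sketch of that comparison, with hypothetical function and variable names (the species are among the four targets this solution monitors):

```python
from collections import defaultdict

def per_species_accuracy(true_labels, predicted_labels):
    """Report accuracy per species so weak spots (e.g. one species
    misclassified in a particular habitat) can guide optimization."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(true_labels, predicted_labels):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    return {species: correct[species] / total[species] for species in total}

# Example: field-validated labels vs. model output
truth = ["Gyps africanus", "Gyps africanus", "Gyps rueppelli"]
preds = ["Gyps africanus", "Gyps rueppelli", "Gyps rueppelli"]
acc = per_species_accuracy(truth, preds)
# acc["Gyps africanus"] == 0.5, acc["Gyps rueppelli"] == 1.0
```

Per-species breakdowns like this make it clear where additional camera traps or training data are needed, rather than hiding weaknesses inside one overall accuracy figure.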
AI-powered vulture species recognition model

The building block aims to automate vulture monitoring by developing a model to detect and classify four vulture species (Gyps africanus, Gyps coprotheres, Gyps rueppelli, Torgos tracheliotos) from visual data, reducing manual effort, speeding up analysis, and ensuring consistency. It leverages Google Colab Pro+ to run Python code and train the model on large image datasets, utilizing the Ultralytics package with YOLOv11 for vulture classification. Images are stored on a 2 TB Google Drive, sourced from the iNaturalist database via the rinat R package and supplemented by data from the Southern African Wildlife College and Endangered Wildlife Trust. The CVAT Team plan enables collaborative image annotation, allowing multiple users to label and export images with annotations for training and validation.

  • A high-quality, annotated dataset with diverse images representing the target species in different environments and conditions.
  • Access to computational resources (Google Colab Pro+) for training and validating the AI model.
  • Collaboration with conservationists to validate the model’s results in field conditions.
  • Ensure the dataset is representative of real-world conditions to avoid bias in detection (e.g., lighting, angles, habitats).
  • Regular updates to the model with new data improve accuracy and adaptability.
  • Challenges include misclassifications due to overlapping species traits; having experts validate initial results is essential.
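Before the Ultralytics model can be trained in Colab, the annotated images must be divided into training and validation sets. A minimal, reproducible sketch of that step, assuming a flat list of image filenames (the 80/20 ratio, seed, and function name are illustrative; the split actually used is not stated here):

```python
import random

def train_val_split(image_ids, val_fraction=0.2, seed=42):
    """Split annotated image IDs into (train, val) lists with a fixed
    seed, so the split is reproducible across Colab sessions."""
    ids = sorted(image_ids)           # deterministic starting order
    random.Random(seed).shuffle(ids)  # reproducible shuffle
    n_val = int(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]

# Example with 100 hypothetical annotated images
train, val = train_val_split([f"img_{i:04d}.jpg" for i in range(100)])
# len(train) == 80, len(val) == 20, and no image appears in both
```

Keeping the split deterministic matters when datasets are regularly updated with new iNaturalist or partner imagery: accuracy changes can then be attributed to the new data rather than to a reshuffled validation set.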