Since 2009, Digital Green has partnered with NGOs and government agencies to improve the efficiency of existing agricultural extension activities and raise the livelihoods of smallholder farmers across the developing world by training partners to produce and share videos that are by farmers, of farmers, and for farmers.
A community video production team of 4-6 individuals in each targeted district creates videos based on our partners’ existing extension interventions, which typically involve programs like farmer field schools and demonstration plots. These videos, which average 8-10 minutes in length, are then shared among small groups of farmers, mostly women’s self-help groups, on a biweekly basis using battery-operated pico projectors. A facilitator from the community mediates a discussion around the video screenings by pausing, rewinding, asking questions, and responding to feedback. Before asking about costs and benefits, farmers are often most interested in questions like, “What is the name of the farmer in the video?” and “Which village is he or she from?” to see whether the video comes from a source that they can identify with. We have found that even showing a plastic bucket in a video can raise questions about the bucket’s price and where it can be bought, even though farmers could use any vessel they have for the practice. Some farmers even adopt practices just so that they can be seen in their communities as a role model (a la “Farmer Idol”).
After each video screening, the facilitators visit farmers’ fields to check whether they’ve taken up the practices they saw for themselves or might need some follow-up support. The usage data and feedback that these facilitators capture inform the production and distribution of each subsequent set of videos, so that the videos iteratively better address the needs and interests of the communities they work with.
We started as a project in Microsoft Research India’s Technology for Emerging Markets group in 2006, where we found that the approach could improve the efficiency of an existing extension system by a factor of ten (http://itidjournal.org/itid/article/view/322). We now work across seven states in India and in parts of Ethiopia and Ghana, with about 150,000 people in 2,000 villages regularly watching these videos.
We wouldn’t have been able to scale at this pace without the foundational work that our partners have done in building rapport with communities to mobilize small groups of farmers; engaging a grassroots-level cadre of trainers; developing locally relevant programs and practices through a combination of structured and informal research; and establishing physical linkages with supporting products and services, like those of banks, input providers, markets, and government schemes.
Over the years, we’ve built tools like COCO to support our partners in capturing data on the videos that individuals watch, the questions they ask, the interests they express, and the practices they adopt in places with limited or intermittent connectivity. We recently began working with Dimagi to do so using CommCare where possible as well. We’ve also developed several tools to visualize the content and data that we capture: (1) a set of analytics dashboards to track key performance indicators across time and geographic dimensions, (2) a video library that serves as a vertical search/filter layer for the more than 2,600 videos we have uploaded to YouTube, and (3) “Farmerbook,” a view into the history of an individual farmer’s videos watched and practices adopted, plotted on a Google Map. In response to increasing hits on our YouTube channel, we’re in the process of building a Khan Academy-like site that structures the videos as curricula.
We’re currently in the midst of extending our approach to 10,000 villages in collaboration with the Government of India’s National Rural Livelihoods Mission. As we do so, we’re looking to mine the database of farmers’ interactions in watching videos, asking questions, and adopting practices. We made an initial attempt to model the diffusion of practices across this social network; however, when we asked members of the community whom they considered “influential,” their perceptions did not correlate with what the data had suggested. Consequently, we’re testing the robustness of the influencers that the algorithm identifies by looking at the consistency of the results when segmenting the videos by season and crop.
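One simple way to run this kind of consistency check is to rank candidate influencers separately within each segment and measure how much the top-ranked sets overlap. The sketch below is only illustrative, not our actual pipeline: the farmer names, the edge format (an assumed list of influencer-to-adopter pairs), and the use of a top-k Jaccard overlap are all hypothetical choices.

```python
from collections import Counter

def top_influencers(events, k=3):
    """Rank farmers by how many adoptions they appear to trigger.
    `events` is an assumed list of (influencer, adopter) pairs; the
    score is simply each influencer's out-degree in that edge list."""
    scores = Counter(src for src, _ in events)
    return [name for name, _ in scores.most_common(k)]

def overlap(a, b):
    """Jaccard overlap between two top-k influencer lists (1.0 = identical sets)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# Hypothetical adoption edges, segmented by cropping season
kharif = [("asha", "ravi"), ("asha", "mina"), ("lata", "ravi")]
rabi = [("asha", "sita"), ("lata", "mina"), ("asha", "gita")]

# A ranking that is robust across segments should score near 1.0
stability = overlap(top_influencers(kharif, k=2), top_influencers(rabi, k=2))
```

If the overlap stays high when the data is re-segmented by season or crop, that is some evidence the ranking reflects durable influence rather than noise in one slice of the data.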
We’re also trying to establish whether we’re only identifying early adopters, who might just be good at deciding which practices to take up, or influencers, who have an ability to affect the behavior of others in their communities based on whether they do or don’t adopt a practice. We’d like to identify patterns among influencers for two key reasons: to better determine whom to feature in videos, and to more appropriately target videos to the communities to whom they are shown and the facilitators who screen them.
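The distinction can be made operational by looking at timing: an early adopter takes up a practice soon after a screening, while an influencer is followed by peer adoptions shortly after their own. This is a minimal sketch under assumed data shapes; the nested-dict format, the 14-day "early" and 30-day "follower" windows, and the farmer names are all hypothetical.

```python
def early_vs_influential(adoptions, farmer, peers, early_day=14, lag=30):
    """Separate two signals for one farmer.

    adoptions: {practice: {farmer: day_adopted}} (assumed shape, days
    counted from the video screening). Returns (adopted_early, n_followers):
    whether the farmer mostly adopts within `early_day` days, and how many
    peer adoptions fell within `lag` days after the farmer's own adoption."""
    early = followers = practised = 0
    for days in adoptions.values():
        if farmer not in days:
            continue
        practised += 1
        t = days[farmer]
        if t <= early_day:
            early += 1
        # Peers who adopted shortly after this farmer did
        followers += sum(1 for p in peers if p in days and t < days[p] <= t + lag)
    adopted_early = practised > 0 and early / practised >= 0.5
    return adopted_early, followers

# Hypothetical adoption-day records for two practices
adoptions = {
    "compost": {"asha": 5, "ravi": 20, "mina": 50},
    "sri": {"asha": 10, "ravi": 25},
}
result = early_vs_influential(adoptions, "asha", ["ravi", "mina"])
```

A farmer who scores high on the first signal but low on the second may simply be a good early adopter; a high follower count, sustained across practices, is closer to the notion of influence described above.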
We welcome any thoughts or pointers from others who have navigated or have experience with this sort of targeting. Email Rikin Gandhi, CEO.