Introducing Lightful’s AI Squad
When ChatGPT was first introduced, the team at Lightful spent hours diving into the world of generative AI to see what it could do. We realised that in order to grasp its full potential we needed a team of cross-functional experts to focus on this emerging technology. So we assembled an internal ‘AI Squad’ composed of designers, digital coaches, engineers and product experts. As a result, we have been able to quickly innovate, iterate and prototype ideas with nonprofits, co-creating solutions to their challenges and keeping humans at the centre of the design process.
We started our product development from first principles - what problems are the charities we work with facing, and can AI tools solve those problems?
We have been closely watching the rise of generative AI and how the nonprofit sector has felt about it since the launch of ChatGPT last December. As well as ‘talking the talk’, we also wanted to ‘walk the walk’ and understand how the technology could be integrated into the products we build for nonprofits. Naturally, as a company that supports nonprofits using digital tools, we needed to learn how to use AI ourselves, so we can teach others.
However, integrating generative AI into a product is not like the other integrations we build. On one level, it is similar to how we integrate with Twitter or Facebook in our social media management platform, in that we send some information via an API call and get some other information back. The key difference is that whereas we know all the possible responses from a social media API (e.g. a success or fail response), the response from the GPT-4 API could be anything. And we might not know if it is a ‘good’ or ‘bad’ response to whatever question we asked it.
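The contrast can be sketched in a few lines of code. This is an illustrative example, not our production code, and the function and field names are hypothetical: a social API response can be validated against a known, finite schema, while a generative model's output can only be sanity-checked heuristically.

```python
# Hypothetical sketch: validating a known-schema API response vs
# sanity-checking free text from a generative model.

def validate_social_response(resp: dict) -> bool:
    # A social media API has a finite response shape we can check exactly:
    # a status flag, plus a post id on success.
    return resp.get("status") in {"success", "error"} and (
        resp.get("status") != "success" or "post_id" in resp
    )

def sanity_check_ai_response(text: str, max_chars: int = 2000) -> bool:
    # The model can return anything, so we can only apply loose heuristics:
    # non-empty, within a length budget, not an obvious refusal.
    if not text.strip() or len(text) > max_chars:
        return False
    return not text.lower().startswith("i'm sorry")

print(validate_social_response({"status": "success", "post_id": "123"}))
print(sanity_check_ai_response("Try opening with a question to boost engagement."))
```

The first check is exhaustive; the second can never be, which is why human review and testing stay in the loop.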
The fact that a generative AI tool can answer ‘anything’ also leads to the temptation to ask ‘what could we use this for?’ That approach should be resisted. Start with the problem and work out a solution, not the other way around.
Our approach was to map out all of the challenges we see charities face and narrow down the problems using the opportunity solution tree approach, before starting to see how we could use AI to solve those problems. Once we had a problem space, we could use the available AI tools to see if they worked. This involved adding a new step in our regular product development cycle - prompt design.
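One way to picture the prompt design step is to treat the prompt as an explicit, versioned artefact with named slots for the inputs the product supplies at runtime. This is a minimal sketch under our own assumptions; the template text and function names are illustrative, not Lightful's actual prompts.

```python
# Hypothetical sketch: a prompt as a versioned template with named slots,
# filled in at runtime from product data.

FEEDBACK_PROMPT = (
    "You are a social media coach for nonprofits.\n"
    "Review the draft post below and suggest improvements.\n"
    "Charity focus: {cause}\n"
    "Draft post:\n{draft}"
)

def build_prompt(cause: str, draft: str) -> str:
    # Keeping prompt assembly in one place makes iteration and review easier.
    return FEEDBACK_PROMPT.format(cause=cause, draft=draft)

print(build_prompt("clean water", "We did a thing today."))
```

Treating prompts this way means they can be reviewed, diffed and tested like any other part of the codebase.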
Developing with AI and agility
As we built AI prototypes to integrate with our existing technologies, we encountered various challenges that required creative solutions. Three of these challenges stand out in our journey of innovation and learning:
- Adapting to rapidly evolving AI technologies: During the initial prototyping of our social media posts AI-feedback tool, GPT-4 was released. This new model significantly enhanced responses and feedback capabilities. Our Agile approach allowed us to incorporate changes easily, including swapping out older models for the latest versions. This flexibility was instrumental in keeping our prototype up-to-date and aligning it with the most recent advancements.
- Understanding and establishing best practices: Working with cutting-edge AI technology often means navigating uncharted territory, where established best practices may be scarce or insufficient. While standard coding practices can be adapted, they might not seamlessly align with the unique AI environment. Over time, as more professionals embraced AI, best practices emerged, and we incorporated them into our iterative development cycles. OpenAI's support in disseminating these best practices has been invaluable in steering us in the right direction.
- Overcoming the steep learning curve of AI and related technologies: Looking back to the start of our journey in December, we had not foreseen how much linguistic terminology and natural language processing would reshape our development work. Although developers no longer need to be experts in natural language processing due to the introduction of abstraction layers, acquiring foundational knowledge about the underlying concepts has been essential in grasping the full potential of AI.
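The first challenge above, swapping older models for newer ones, is easiest when the model name never appears at call sites. A minimal sketch of that idea, with illustrative names (the environment variable and functions are assumptions, not our production code):

```python
# Hypothetical sketch: keep the model name in configuration so upgrading
# from one model to the next is a one-line change, not a code hunt.

import os

DEFAULT_MODEL = "gpt-4"

def model_name() -> str:
    # An environment variable lets us trial a newer model without a deploy.
    return os.environ.get("AI_MODEL", DEFAULT_MODEL)

def completion_request(prompt: str) -> dict:
    # The payload that would be sent to a chat completions endpoint.
    return {
        "model": model_name(),
        "messages": [{"role": "user", "content": prompt}],
    }

print(completion_request("Hello")["model"])
```

When GPT-4 arrived mid-prototype, this kind of indirection is what let the swap happen without touching feature code.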
Designing with agile practices for AI development
As we said earlier, one key challenge in designing for AI is that there are no visible rules on how an AI functions, unlike a traditional interactive system which has predictable responses based on known rules. This means that, in our design activities, we required a great deal of testing to help us create a ‘mental model’ of how the AI works given particular prompts. This mental model acted as a kind of collective knowledge that we built upon through sharing the results of our tests and the assumptions about what types of prompts and other inputs generate what type of results.
We used a number of other collaborative techniques to enhance our ability to imagine possibilities, tailor products to users’ needs, and iterate rapidly on prototypes, including:
- Human-centred design: We immersed ourselves in users' perspectives to understand their key challenges and goals, and determine whether we were actually solving the problems we set out to.
- Continuous prompt testing: We encouraged all team members to test prompts to optimise AI outputs and benefit from each person’s perspective. Because of how prompts work, different phraseology can produce different outcomes, and each person’s slightly different way of communicating meant we could quickly generate a wide variety of prompts.
- Regular cross-collaborative feedback loops: We had daily standups to benefit from feedback from a variety of perspectives, which enabled us to quickly highlight and address problems and ideate on new opportunities.
- Understanding the constraints of AI: AI is profoundly useful, but it has its constraints in how it could be used, and understanding these constraints only increased the utility of our AI applications. We focussed on areas where AI could offer suggestions and feedback rather than wholesale content generation, as this was where we found its application the strongest.
- Keeping the human in the loop: As with the above, we found that AI worked best as a collaborative tool for our users rather than as one to replace them wholesale. This is especially true given that we are a learning company - our AI applications prompted the user to think. In this way, our applications acted as a kind of cognitive loop moving from user to AI and back again.
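The collective prompt testing described above can be pictured as a small harness: each team member contributes a phrasing, every phrasing runs through the same scorer, and the results are compared side by side. This is a toy sketch; `fake_model` stands in for a real API call so the example runs offline, and the scoring rule is deliberately simplistic.

```python
# Hypothetical harness for comparing prompt variants contributed by
# different team members. fake_model substitutes for a real GPT-4 call.

def fake_model(prompt: str) -> str:
    # Stand-in for the model: returns a canned reply whose length varies
    # with the prompt, so the ranking below has something to work with.
    return "suggestion " * (len(prompt) % 5 + 1)

def score(response: str) -> int:
    # Toy scoring rule for the sketch: shorter feedback ranks higher.
    return -len(response)

variants = [
    "Give feedback on this post.",
    "As a fundraising coach, suggest three improvements to this post.",
    "What would make this post more engaging?",
]

# Rank every contributed phrasing by the quality of what it produces.
ranked = sorted(variants, key=lambda p: score(fake_model(p)), reverse=True)
print(ranked[0])
```

In practice the scorer was us: humans reading the outputs and sharing what worked, which is how the shared ‘mental model’ of the AI was built.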
How the AI squad process helped us create useful features
By working through this process over a 10-week period, the team were able to test, prototype and build multiple features. The first - and most used so far - was a new addition to the Lightful Social Platform. With the help of GPT-4, when content is first drafted by the participant, they can click an ‘AI Feedback’ button to get three specific alternative suggestions and insights on how they might improve their social post. The insights explain the principles behind each suggestion, so people can learn best practice theory and how to apply it, instead of just following instructions without understanding the reasoning.
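A feature like this needs the model's free-text reply turned into exactly three suggestion-and-insight pairs. One common approach, sketched here with illustrative field names (not Lightful's actual schema), is to ask the model for JSON and parse it defensively; the sample response is hard-coded so the sketch runs offline.

```python
# Hypothetical sketch: parsing a model reply into three suggestion/insight
# pairs, dropping anything malformed. Field names are illustrative.

import json

sample_response = json.dumps({
    "suggestions": [
        {"rewrite": "Open with a question.", "insight": "Questions invite replies."},
        {"rewrite": "Add a call to action.", "insight": "Clear asks drive engagement."},
        {"rewrite": "Mention a beneficiary.", "insight": "Concrete stories build trust."},
    ]
})

def parse_feedback(raw: str) -> list:
    data = json.loads(raw)
    items = data.get("suggestions", [])
    # Keep only well-formed pairs, and at most three of them.
    return [s for s in items if {"rewrite", "insight"} <= s.keys()][:3]

for s in parse_feedback(sample_response):
    print(f"- {s['rewrite']} ({s['insight']})")
```

Pairing each rewrite with its insight is what lets the UI teach the principle, not just dictate the change.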
By deliberately sequencing the workflow in this way, Lightful helps staff and volunteers at nonprofits use AI while remaining authentic, empathetic and logical - the key basis of building trust.
At Lightful, we recognise the transformative potential of agile methodology and AI technology. Our journey of integration and learning is guided by our commitment to serve nonprofits worldwide. By embracing collaboration, agility, and a problem-first approach, we will create powerful AI-powered solutions that empower nonprofit professionals and contribute to their mission-driven work.
Our tips for getting started with generative AI:
- Use the technology to see what it can do
- Get a group together to discuss generative AI, and get different perspectives on using it and potential use cases
- Start with the problems you need to solve, not the technology
- Expect to spend a lot of time testing prompts
- Test prompts with a range of inputs, both real and imagined, to understand AI constraints
- Get feedback early, get feedback often, and use humans and AI tools to do so