Scenario Planning for a Future with AI
Yesterday, in my class at the Integrated Innovation Institute of Carnegie Mellon University, two ideas came together, possibly opening a new space for innovation in Generative AI.
The day started with a lecture by Shyamala Pragya, Senior Software Product Manager: Deep Learning @ NVIDIA.
Shyamala was my client. We worked together on a project exploring UX design principles for conversational AI. She is an industry expert in this area and has led initiatives across the connected home, automotive, and wearables categories.
Shyamala’s lecture, “Artificial Intelligence: Balancing Innovation and Responsibility,” captured the current state of the industry with clarity. She pointed out that while AI offers great potential for innovation and progress, it also poses significant ethical and societal challenges. Her talk explored the importance of balancing innovation and responsibility in AI development and deployment. In class she discussed the ethical considerations that must be prioritized in AI, such as transparency, accountability, and fairness.
Unfortunately, nobody knows who will discipline this technology, or how. The genie is out of the bottle. We are sitting at the fuzzy front end of the process of unleashing generative AI, hoping that government regulations and ethical guidelines created by the developer community will help avoid dire consequences. Shyamala gave the example of recent news, reported by the Belgian outlet La Libre, of a Belgian man who died by suicide after chatting with an AI chatbot on an app called Chai. This created a moment of silence in the class.
One of the students had an instant reaction to Shyamala’s story: “It seems the industry has created a virus, and now we are looking for an antidote to contain it.”
Shyamala pointed out that, as a conscientious member of the tech community, she considers it her responsibility to keep exploring possible future scenarios and to make her employers aware of the pitfalls as they work on developing generative AI applications for positive outcomes. I have no doubt that training developers of AI solutions in scenario planning and ethical boundary-setting skills is going to be a critical need moving forward.
After Shyamala’s class, I facilitated a discussion about the role of personas in the product development process. We discussed the difference between quantitative segmentation studies, typically conducted by the industry for market sizing, and qualitative personas used as characters and behaviors in envisioning future scenarios. I pointed out that, against the background of Shyamala’s provocations, we would need to engage in developing future scenarios for generative AI with the same sense of urgency as scenario planning conducted by armed forces during a war.
We discussed the difference between scenario planning and persona development in B2B and B2C UX design projects, and I shared our past experience of developing personas for both types of clients. This conversation led a student to suggest that in the future we may need to envision generative AI scenarios in which, instead of considering AI as one intelligent agent, we envision a community of AI agents, each with its own behavioral archetype, role, responsibilities, and rules of conduct. I commented that we may be heading toward the creation of a future society in which real-world people learn to live with a community of chatbots, each with its own persona characteristics and a cleverly defined code of conduct. How such a society of quasi-digital characters might unfold is indeed a good provocation for envisioning the future on a war footing.
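To make the student’s suggestion a little more concrete, here is a minimal sketch of what a “persona card” for one agent in such a community might look like as a data structure. All of the names here (AgentPersona, permits, the sample agents “Sage” and “Scout”) are hypothetical illustrations, not part of any real system discussed in class.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPersona:
    """A hypothetical persona card for one AI agent in a multi-agent community."""
    name: str
    archetype: str                # behavioral archetype, e.g. "mentor" or "explorer"
    role: str                     # this agent's responsibility within the community
    code_of_conduct: list[str] = field(default_factory=list)

    def permits(self, action: str) -> bool:
        # A naive conduct check: the action is allowed only if no rule
        # in this agent's code of conduct explicitly forbids it.
        return all(rule != f"never {action}" for rule in self.code_of_conduct)

# A tiny community of agents, each with its own persona and rules of conduct.
community = [
    AgentPersona("Sage", archetype="mentor", role="explain concepts",
                 code_of_conduct=["never give medical advice", "always cite sources"]),
    AgentPersona("Scout", archetype="explorer", role="gather information",
                 code_of_conduct=["never fabricate data"]),
]

for agent in community:
    print(agent.name, agent.permits("give medical advice"))
```

Even a toy model like this raises the design questions the class touched on: who writes each agent’s code of conduct, and who is accountable when the rules conflict or fall short.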
We may not be able to put the genie back in the bottle, but we can definitely be mindful and deliberate as we nurture the chatbots of the future, which are still in their infancy. I am excited to facilitate the imagination of the AnthroDesign community and play our part in the emergence of this future.