Exploring the Boundaries of Human-Generative AI Communication
These musings were spurred by the ideas shared during a panel discussion on Participatory Design and AI, in which I took part at the recent PDC2024 conference.
For years, I have advocated for participatory design as an approach that invites everyday people into the design process. To this definition, I would add a crucial component: designers and engineers participating in the lives of everyday people. When engineers, designers, and business strategists make decisions from the ivory towers of corporate real estate or, even more problematic today, while working remotely from home offices, a misalignment inevitably arises between people's actual needs and the benefits developers perceive in their technologies or ideas. Even when people are invited into the design process, there is a risk of unintentionally steering them toward biased perspectives rather than genuinely understanding their lives and needs through immersion, observation, and conversation.
As generative AI makes strides toward an Artificial General Intelligence capable of matching or surpassing human cognitive abilities, the software engineering community is understandably excited. At the same time, the rise of generative AI programs has stirred concerns. Questions abound: Will AI achieve sentience? What will the human-AI relationship look like? How will developers or legislators ensure that AI operates within ethical boundaries and does not harm human interests?
While I cannot predict whether AI will become sentient, or what the consequences might be if it does, I can offer insights into human sentience and the possible scope of human-AI communication.
Two sentient beings connect through shared thoughts, emotions, actions, and presence. Communication between them involves exchanging meaning, or “sense-making,” and requires context for proper understanding. Even silence can be interpreted through context.
Considering my framework of sentient communication, I question the extent to which AI can contribute to thinking together, feeling together, doing things together, and being together. In which of these aspects can AI make meaningful contributions?
After interacting with various generative AI programs and providing feedback to an AI-enabled platform over several months, I’ve formulated tentative hypotheses:
- Thinking Together: Current generative AI shows promise in enhancing human-AI thought exchange, but we must establish guardrails to prevent misunderstandings or misuse.
- Feeling Together: AI struggles to grasp human emotions unless they are expressed verbally. Although some programs can infer emotions from facial expressions, truly meaningful emotional exchange between AI and humans seems unlikely.
- Doing Things Together: AI-guided robotics may enable shared activities, such as playing games. However, the memories and rituals that emerge from shared experiences, and that are integral to human relationships, seem beyond AI's reach.
- Just Being Together: It’s difficult to imagine AI participating in the profound exchanges that occur between living beings simply by being present. Can AI interpret presence or silence in context? I doubt it.
Overall, while generative AI has potential, it appears to have a long way to go before it can engage in human-like communication.
In conclusion, it is crucial for developers and decision-makers to step out of their corporate offices and gain a deeper understanding of how everyday people live, what they actually need, and how they derive meaning from communication. Rather than obsessing over applying AI to every product or perceived need, they should focus on aligning AI solutions with people's genuine requirements and communication patterns.